AI working for the people or against the people

AI and technology can be used for good or ill. Currently, some AI use cases involve monitoring the population and invading privacy. For example, consider this glimpse into China's social credit system, an Orwellian nightmare.

Another big use case is getting people addicted to their devices through design, in order to sell more advertising and capture their valuable attention (what Albert Wenger calls the new scarcity). We've posted links before to podcasts by Tristan Harris about the technology arms race to capture your attention. This slide from Mary Meeker's Internet Trends presentation shows how people are using their devices more and more:

Source: Mary Meeker, Internet Trends

Here are some interesting videos and links on flipping AI and technology on their head to serve the interests of the people. Why not monitor the government for corruption, in service of individuals? Why not create an AI sidekick or bot for individuals that monitors their weaknesses and handles tasks like purchasing the cheapest possible item? These seem like much better projects than some of the current ones.
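As a toy illustration of the "bot that purchases the cheapest possible item" idea, here is a minimal Python sketch of an agent that compares offers and picks the lowest price on the user's behalf. All names here (`Offer`, `cheapest_offer`, the sample stores and prices) are hypothetical, invented purely for illustration, not from any real shopping API:

```python
from dataclasses import dataclass

@dataclass
class Offer:
    seller: str
    price: float  # price in dollars
    item: str

def cheapest_offer(offers, item):
    """Return the lowest-priced offer for the requested item, or None if no seller has it."""
    matching = [o for o in offers if o.item == item]
    return min(matching, key=lambda o: o.price, default=None)

# Hypothetical offers the sidekick has gathered for its user
offers = [
    Offer("StoreA", 19.99, "headphones"),
    Offer("StoreB", 14.50, "headphones"),
    Offer("StoreC", 22.00, "keyboard"),
]

best = cheapest_offer(offers, "headphones")
print(best.seller, best.price)  # StoreB 14.5
```

A real sidekick with fiduciary duties would of course need far more than price comparison (privacy-preserving data handling, protection against manipulated listings), but the key design point is the same: the agent optimizes for the user, not for the seller.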

https://www.wired.com/story/artificial-intelligence-yuval-noah-harari-tristan-harris/
https://video.wired.com/watch/yuval-harari-tristan-harris-humans-get-hacked (video)
“We can use the technology in many different ways. I mean for example we now are using AI mainly in order to surveil individuals in the service of corporations and governments. But it can be flipped to the opposite direction. We can use the same surveillance systems to control the government in the service of individuals, to monitor, for example, government officials that they are not corrupt. The technology is willing to do that. The question is whether we’re willing to develop the necessary tools to do it.”

“YNH: The system in itself can do amazing things for us. We just need to turn it around, that it serves our interests, whatever that is and not the interests of the corporation or the government. OK, now that we realize that our brains can be hacked, we need an antivirus for the brain, just as we have one for the computer. And it can work on the basis of the same technology. Let’s say you have an AI sidekick who monitors you all the time, 24 hours a day. What do you write? What do you see? Everything. But this AI is serving you as this fiduciary responsibility. And it gets to know your weaknesses, and by knowing your weaknesses it can protect you against other agents trying to hack you and to exploit your weaknesses.”

“So we think about this is like the whole framework of humane technology is we think this is the thing: We have to hold up the mirror to ourselves to understand our vulnerabilities first. And you design starting from a view of what we’re vulnerable to. I think from a practical perspective I totally agree with this idea of an AI sidekick but if we’re imagining like we live in the reality, the scary reality that we’re talking about right now. It’s not like this is some sci-fi future. This is the actual state. So we’re actually thinking about how do we navigate to an actual state of affairs that we want, we probably don’t want an AI sidekick to be this kind of optional thing that some people who are rich can afford and other people who don’t can’t, we probably want it to be baked into the way technology works in the first place, so that it does have a fiduciary responsibility to our best, subtle, compassionate, vulnerable interests.

NT: So we will have government sponsored AI sidekicks? We will have corporations that sell us AI sidekicks but subsidize them, so it’s not just the affluent that have really good AI sidekicks?”


World After Capital by Albert Wenger – bots to serve the people

AI odds and ends:

https://hbr.org/2018/09/a-blueprint-for-a-better-digital-society
“But there is an alternative: an emerging class of business models in which internet users are also the customers and the sellers. Data creators directly trade on the value of their data in an information-centric future economy. Direct buying and selling of information-based value between primary parties could replace the selling of surveillance and persuasion to third parties. Platforms would not shrivel in this economy; rather, they would thrive and grow dramatically, although their profit margins would likely fall as more value was returned to data creators. Most important, a market for data would restore dignity to data creators, who would become central to a dignified information economy.”

Syllabus for Glen Weyl’s very interesting course:
https://www.dropbox.com/s/fjzbpaoiq545s55/Syllabus.pdf?dl=0

AI winter update: I've posted quite a few optimistic videos about autonomous driving, so I'll even it out with this reality check. Some great links in this piece.
https://blog.piekniewski.info/2018/10/29/ai-winter-update/

“While on the self-driving car subject, one of the main criticisms of my original AI winter post was that I omitted Waymo from my discussion, them being the unquestionable leader in autonomy. This criticism was a bit unjustified in that I did include and discussed Waymo extensively in my other posts [1], but in these circumstances it appears prudent to mention what is going on there. Luckily a recent very good piece of investigative journalism shines some light on the matter. Apparently Waymo cars tested in Phoenix area had trouble with the most basic driving situations such as merging onto a freeway or making a left turn, [1]. The piece worth citing from the article:

‘There are times when it seems “autonomy is around the corner,” and the vehicle can go for a day without a human driver intervening, said a person familiar with Waymo. Other days reality sets in because “the edge cases are endless.”’

Some independent observations appear to confirm this assessment. As much as I agree that Waymo is probably the most advanced in this game, this does not mean they are anywhere near to actually deploying anything seriously, and even further away from making such deployment economically feasible (contrary to what is suggested in occasional puff pieces such as this one). Aside from a periodic PR nonsense, Waymo does not seem to be revealing much, though recently some baffling reports of past shenanigans in google chauffeur (which later became Waymo) surfaced, involving Anthony Levandowski who is responsible for the whole Uber-Waymo fiasco. To add some comical aspect to the Waymo-Uber story, apparently an unrelated engineer managed to invalidate the patent that Uber got sued over, spending altogether 6000 dollars in fees. This is probably how much Uber payed their patent attorneys for a minute of their work…

Speaking of Uber they substantially slowed their self-driving program, practically killed their self driving truck program (same one that delivered a few crates of beer in Colorado in 2016 with great fanfares, a demo that later on turned out to be completely staged), and recent rumors indicate they might be even looking to sell the unit.

‘Generally the other self driving car projects are facing increasing headwinds, with some projects already getting shut down by the government agencies, and others going more low-key with respect to public announcements. Particularly interesting news came recently out of Cruise, the second in the race right after Waymo (at least according to California disengagement data). Some noteworthy bits from the Reuters article:

Those expectations are now hitting speed bumps, according to interviews with eight current and former GM and Cruise employees and executives, along with nine autonomous vehicle technology experts familiar with Cruise. These sources say that some unexpected technical challenges – including the difficulty that Cruise cars have identifying whether objects are in motion – mean putting GM’s driverless cars on the road in a large scale way in 2019 is looking highly unlikely.

“Nothing is on schedule,” said one GM source, referring to certain mileage targets and other milestones the company has already missed.”

“Now I could not care less about results in game domains, since as I stated multiple times on this blog [1, 2], the only problem really worth solving in AI is the Moravec’s paradox, which is exactly the opposite of what DeepMind or OpenAI are doing, but I nevertheless found this media misfire hilarious.”

Data should be seen as labor rather than capital (Glen Weyl).
