“What do you think they were trying to get us to do with those?” Mr. Benioff asked onstage at the New Work Summit. “I don’t think this is any different now. They’re focused on the addictive nature of these user interfaces. At some point, we have to say, ‘Hold on.’”
Silicon Valley is nearing the point where moral lines will have to be drawn by the government since the industry is not regulating itself and does not seem to be guided by a “strong hand of ethics,” he said.
There is a “crisis of trust,” he added, that will become a “tidal wave” among American consumers who are growing wary of tech giants.
“I’m trying to protect our industry,” said Mr. Benioff, whose company sells access to business software over the internet. “If companies aren’t able to regulate themselves, they should have external regulations.”
One of the solutions he suggested was to move decision-making away from algorithms and back toward humans. He cited Facebook’s decision to disband its trending news curation team as an example.
“Facebook had a team promoting, sorting and focusing stories, 100 curators that they removed and put an algorithm in their place, and that started the downward spiral that they’re in today,” Mr. Benioff said.
He also spoke about Salesforce’s recent ban on kegs of beer at its offices, imposed after he stumbled across one while visiting a company Salesforce had acquired.
“I view alcohol as a drug,” he said. “The keg is not there anymore — well, actually who knows, but I hope it’s not. I took a photo of the keg and put it on our social network and said, ‘This is not who we are.’” — Nellie Bowles
Should A.I. be more ‘human’?
Fei-Fei Li, a chief scientist at Google and a Stanford professor, has called on technologists to take a more “human centered” approach to the creation of artificial intelligence. On Tuesday at the New Work Summit, Ms. Li said that researchers must work to ensure that A.I. embodied human qualities and that it would ultimately operate alongside humans, not replace them.
“I often tell my students not to be misled by the name ‘artificial intelligence’ — there is nothing artificial about it,” she said. “A.I. is made by humans, intended to behave by humans and, ultimately, to impact human lives and human society.”
At Stanford, Ms. Li was instrumental in the recent rise of “computer vision” systems that can recognize people and objects entirely on their own. At Google, she is working to package and sell these and other systems as cloud computing services, delivering the latest A.I. technology to a wide range of businesses.
But she said that as Google and other internet giants pushed these techniques forward, academia and the government must help ensure that A.I. evolved into something that enhanced our humanity, created as many jobs as it replaced and operated in safe and predictable ways.
In particular, Ms. Li said, academic institutions can help ensure that computer scientists work alongside social scientists in building this new breed of technology.
“A.I. has outgrown its origin in computer science,” she said.
Ultimately, said Ms. Li, who was born in China, A.I. reflects the people who build it more than other technologies do. For that reason and others, she said, A.I. researchers must work in a way that spans not only many industries but many cultures as well.
“I really believe there are no borders for science,” she said. — Cade Metz
Why American tech companies struggle in China
Tuesday’s first speaker at the New Work Summit was Kai-Fu Lee, who used to lead Google in China and knows a thing or two about American tech giants in China. His prognosis about whether companies like Facebook will ever be able to crack the world’s largest internet market?
“The American products are simply uncompetitive in the China market,” said Mr. Lee, who is now chief executive of Sinovation Ventures, a venture capital firm focused on Chinese technology. Even if internet titans from the United States could operate in China, he said, the local competition means they would have a hard time thriving.
“Messenger is a much worse product than WeChat,” he said, referring to Facebook’s messaging app and Tencent’s ubiquitous app for chatting, social networking, making payments and other tasks.
“Amazon in China is substantially worse than Taobao, JD and Tmall,” he said, referring to three leading Chinese e-commerce sites. And, he said, “Apple Pay is much narrower and much harder to use than WeChat or Alipay.”
Mr. Lee sees other obstacles to a big Facebook or Google renaissance in China. Multinational companies tend not to hire local managers to lead their China operations. “They’re not concerned about winning in the local market,” he said.
Also, young Chinese these days would rather work for national champions like Alibaba or Tencent. Pitted against Chinese start-ups and big companies, where the hours tend to be long and the work culture cutthroat, the leading lights of American tech would “get eaten for lunch.” — Raymond Zhong
Trump administration silent on A.I.
Last year, the Chinese government unveiled a plan to become the world leader in artificial intelligence by 2030, vowing to create a domestic industry worth $150 billion. This manifesto read like a challenge to the United States, and in many ways it echoed policies laid down by the Obama administration in 2016.
But as China pushes ahead in this area, many experts are concerned that the Trump administration is not doing enough to keep the United States ahead in the future. Although the big American internet giants are leading the A.I. race, these experts believe the country as a whole could fall behind if it does not do more to nurture research inside universities and government labs. — Cade Metz
Waymo C.E.O. ‘really happy’ with Uber settlement
John Krafcik, chief executive of the self-driving car company Waymo, took the stage at the New Work Summit on Monday night and spoke out for the first time since his company reached a settlement last week with Uber in a lawsuit over trade secrets that riveted Silicon Valley.
“We were really happy with the outcome that we engineered,” Mr. Krafcik said. “We spent a lot of time in that case talking about the hardware, but the extra benefit we got from that suit was the ability to understand and ensure that Uber wasn’t using any of our software.”
He called the software Waymo’s “secret sauce.”
Waymo and Uber spent only four days at trial last week before settling, with Uber agreeing to provide Waymo 0.34 percent of its stock, worth about $245 million. The dispute between the companies started in 2016 when Uber bought Otto, a start-up founded by Anthony Levandowski, an early member of Google’s self-driving car program. Waymo, which was spun out of Google, accused Mr. Levandowski of stealing technology before leaving and accused Uber of using the misappropriated knowledge.
“This was a really special case with a really special set of circumstances,” Mr. Krafcik said. “For us, this was always about, and really just about, the fact that we needed to ensure Uber wasn’t using our trade secrets.” He added that he did not foresee Waymo suing other former employees.
Mr. Krafcik also discussed how Waymo was looking to start a ride-hailing service, which it is testing in Phoenix with thousands of driverless Pacifica minivans.
“We have a plan to move from city to city,” he said. “We’re not going to be launching with a 25 mile-per-hour product. We’re talking about a full-speed service that will serve a very large geographic area with essentially unlimited pickup and drop-off points.” — Nellie Bowles
No, Amazon isn’t using A.I. to cut jobs
Jeff Wilke, the chief executive of Amazon’s consumer business, which includes its e-commerce operations, doesn’t often make public appearances. But on Monday night, he joined the New Work Summit to discuss the internet retailer’s move into artificial intelligence.
His key message: A.I. is everywhere, but that doesn’t mean it will take our jobs.
“If you look at the evolution of technology over the course of decades, tech doesn’t eliminate work; it changes work,” Mr. Wilke said.
He said that over the last five years, since Amazon bought a robot maker called Kiva Systems, it had built 100,000 of the robots — and also hired 300,000 people. “We still need human judgment,” he said.
Amazon has also embedded A.I. throughout the company, he added, with technologists working together with people who run businesses. The company is using machine learning and deep learning, which are different flavors of A.I., to upgrade internal algorithms, he said.
As to how Amazon might use A.I. at Whole Foods, the grocery store chain that it said it would acquire last year, Mr. Wilke said little. When asked whether Amazon would integrate its cashier-less and A.I.-driven convenience store concept, called Amazon Go, with Whole Foods, he said, “I don’t foresee the format of Whole Foods changing very much.” — Pui-Wing Tam
A.I. has become a campaign issue
As A.I. technology barrels ahead in Silicon Valley, it’s also starting to pick up steam as a political issue in Washington.
Over the weekend, I wrote about Andrew Yang, a former tech executive who has decided to run for president in 2020 as a Democrat on a “beware the robots” platform. He thinks that with innovations like self-driving cars and grocery stores without cashiers just around the corner, we’re about to move into a frightening new era of mass unemployment and social unrest.
So he’s proposing a universal basic income plan called the “Freedom Dividend,” which would give every American adult $1,000 a month to guarantee them a minimum standard of living while they retrain themselves for new kinds of work.
Mr. Yang’s campaign is a long shot, and there are significant hurdles to making universal basic income politically feasible. But the conversation about automation’s social and economic consequences is long overdue. Even if he doesn’t win the election, Mr. Yang may have hit on the next big political wedge issue. — Kevin Roose
Artificial intelligence may be biased
In modern artificial intelligence, data rules. A.I. software is only as smart as the data used to train it, as Steve Lohr recently wrote, and that means that some of the biases in the real world can seep into A.I.
If the data contains many more white men than black women, for example, the resulting system will be worse at identifying black women. That appears to be the case with some popular commercial facial recognition software.
Joy Buolamwini, a researcher at the M.I.T. Media Lab, found that the software can now tell if a white man in a photograph is male or female 99 percent of the time. But for darker-skinned women, it is wrong nearly 35 percent of the time. — Joseph Plambeck