How will AI change the world in the next 100 years? Artificial intelligence is ushering in a new era


This article is produced by NetEase Smart Studio (public account smartman163), which focuses on AI and the next big era.

[NetEase Smart News, November 12] Just as electricity changed the way industry operated over the past century, artificial intelligence will change society significantly in the next 100 years. AI is being integrated into home robots, robotaxis, and mental-health chatbots. One startup is using AI to develop robots that bring machines closer to human-level intelligence. AI has already entered people's daily lives: it powers the digital assistants Siri and Alexa, lets consumers shop and search online more accurately and efficiently, and performs other tasks that people now take for granted.

Dr. Andrew Ng, a Coursera co-founder and professor at Stanford University, delivered a keynote address at the AI Frontiers conference in Silicon Valley last week: "AI is the new electricity. About 100 years ago, electricity transformed one major industry after another; AI has now advanced to the point where it can do the same across all mainstream industries in the next few years." Ng said that although people think of AI as a fairly new technology, it has actually existed for decades. It is only now taking off, thanks to the explosion of data and computing power.

Ng said that most of the value created by AI today comes from supervised learning, and that there have been two big waves of progress. In the first wave, deep learning is used to predict, say, whether a consumer will click an online advertisement, given information about that consumer as input. The second wave arrives when the output is no longer a single number but a structured sequence: a sentence from speech recognition, a translation into another language, or audio. In a driverless car, for instance, the input is an image and the output is the positions of the other vehicles on the road.
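The first wave Ng describes — supervised learning that maps user features to a click/no-click label — can be sketched as a minimal logistic-regression training loop. The feature names and synthetic data below are illustrative assumptions, not anything from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features per ad impression: [age_norm, past_click_rate, is_mobile]
X = rng.random((200, 3))
# Synthetic labels: users with a high past click rate tend to click again
y = (X[:, 1] + 0.1 * rng.standard_normal(200) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b, lr = np.zeros(3), 0.0, 0.5
for _ in range(500):
    p = sigmoid(X @ w + b)            # predicted click probability
    grad_w = X.T @ (p - y) / len(y)   # gradient of the log-loss w.r.t. weights
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

On this toy data the model easily recovers the rule that the second feature drives clicks; production click models are of course far larger and trained on real logs.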

Xuedong Huang, Microsoft's chief scientist, said that deep learning — where a computer learns a function from a data set rather than executing only the specific tasks it was programmed to do — has been key to achieving speech recognition on par with humans. In 2016, Huang led the Microsoft team to a historic milestone when its system recorded an error rate of 5.9%, the same as that of human transcriptionists. "Thanks to deep learning, we reached human parity after 20 years," Huang said at the conference. Since then, the team has further reduced the error rate to 5.1%.
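The 5.9% figure is a word error rate (WER): the word-level edit distance (substitutions, insertions, deletions) between the system transcript and a human reference, divided by the number of reference words. A minimal implementation of that metric:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, dropping one word from a six-word reference gives a WER of 1/6, about 16.7%; the benchmark systems above get that number down to roughly one word in twenty.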

The rise of digital assistants

Starting around 2010, the quality of speech recognition began to improve, eventually giving rise to Siri and Alexa. "Now, you almost take it for granted," Ng said. Ruhi Sarikaya, a director on the Amazon Alexa team, added that voice is expected to replace touch input. The key to improving accuracy is understanding context. For example, if a person asks Alexa what to do for dinner, the assistant must assess his intent: does he want Alexa to book a table at a restaurant, order a meal, or find a recipe? If he asks Alexa to find "The Hunger Games," does he want to listen to the music, watch the video, or hear the audiobook?
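The disambiguation problem Sarikaya describes can be caricatured with a toy keyword ranker. Real assistants use trained ranking models over rich context (history, device, time of day); the intent names and cue words below are purely illustrative assumptions:

```python
# Hypothetical intents and context cues that favour each one.
INTENT_CUES = {
    "play_music":     {"listen", "song", "soundtrack", "music"},
    "play_video":     {"watch", "movie", "video", "film"},
    "play_audiobook": {"read", "book", "chapter", "audiobook"},
}

def rank_intents(utterance: str) -> list[str]:
    """Score each intent by how many of its cue words appear in the
    utterance, and return intents from most to least likely."""
    words = set(utterance.lower().split())
    scores = {intent: len(cues & words) for intent, cues in INTENT_CUES.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

With no cue words at all ("find the hunger games"), every intent ties — which is exactly the ambiguity the assistant must resolve from broader context.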

Dilek Hakkani-Tur, a research scientist at Google, said the next step for digital assistants is a more advanced task: understanding "beyond the meaning of words." For example, the phrase "later today" might mean a meeting between 7 pm and 9 pm, or between 3 pm and 5 pm, depending on context. Hakkani-Tur said the next stage will also require more complex and natural dialogues, multi-domain tasks, and interactions that cross domain boundaries. Digital assistants should also be able to do more, such as easily reading and summarizing emails.

After speech recognition comes computer vision: the ability of computers to recognize and classify images. With so many people uploading pictures and videos, manually adding metadata to all that content is impractical, so an automatic way to classify it is needed. According to Manohar Paluri, a computer vision expert at Facebook's AI research group, Facebook has developed a platform called Lumos that understands and categorizes videos at scale. Facebook uses Lumos for data collection, such as gathering images and videos of fireworks. The platform can also identify videos by people's actions, for example classifying a scene of people bustling around a sofa as "going out for a stroll."

Rahul Sukthankar, Google's director of video understanding, added that the key is identifying the main semantic content of an uploaded video. To help computers correctly recognize what is in a video, Sukthankar's team mines YouTube for similar content the AI can learn from, such as footage at the specific frame rates typical of non-professional content. An important direction for future research, Sukthankar added, is using video itself to train computers: if a robot watches a video of a person pouring cereal into several bowls, it should be able to learn the task by observation.

Alibaba uses AI to drive sales. For example, shoppers on its Taobao e-commerce site can upload a picture of a product they want to buy, such as a trendy handbag spotted on a stranger in the street, and the site will surface the handbags that most closely match the photo. Alibaba also uses augmented reality (AR) and virtual reality (VR) to let people browse and shop in stores such as Costco. On its Youku video site, Alibaba is developing a way to increase revenue by inserting virtual 3D objects into user-uploaded videos, since many video sites are still struggling to turn a profit. As Alibaba chief scientist Xiaofeng Ren noted: "YouTube is still losing money."

Rosie and home robots

Although AI technology continues to advance, it still cannot match the human brain. Vicarious is a startup that aims to narrow the gap by developing robots with human-level intelligence. Co-founder Dileep George said the components needed to assemble intelligent robots already exist: "We have cheap motors, sensors, batteries, plastics, and processors... Why don't we have Rosie?" He was referring to the multi-purpose robot maid in the 1960s animated series "The Jetsons." George said today's AI is at the level of what he calls the "old brain," similar to the cognition of rodents; the "new brain" — as seen in primates and whales — is more developed.

When even a small part of the input changes, George said, the "old brain" gets confused. For example, a robot that can play a video game starts making mistakes when the colors become brighter. "Today's AI is not ready yet!" he said. Vicarious uses deep learning to bring robots closer to human cognition. In the same test, a robot running Vicarious's AI was able to keep playing the game despite the change in brightness. Another thing that confuses the "old brain" is two objects placed together. A person can see that two things are stacked — say, a coffee cup partly hiding a vase in a photo — but robots often mistake them for a single unidentified object. Vicarious aims to solve this class of problem, and Facebook CEO Mark Zuckerberg is among its investors.

Kuri, a companion robot and home videographer, takes a different approach. Kaijen Hsiao, chief technology officer of Mayfield Robotics, the company developing Kuri, said a camera behind the robot's left eye records video in HD. Kuri has a depth sensor for mapping the house and uses imagery to improve navigation. She can also detect pets and people, so that Kuri can smile or react when they appear. Kuri has place recognition as well: even when the light changes, she remembers where she has been, recognizing, say, the kitchen by day or by night. "Instant selection" is another feature, letting Kuri pick out the best of similar clips she has recorded — such as Dad playing with the baby in the living room — while discarding the redundant footage.

"Kuri's job is to bring a spark of life to your home," Hsiao explained. "She can also provide entertainment, playing music, podcasts, and audiobooks, and you can check in on your home from anywhere." Kuri acts as the family's videographer, roaming the rooms and relying on vision and deep learning algorithms. "Kuri's biggest feature is her personality — she is a lovable companion," Hsiao said. Kuri will be available in December for $799.

Business responses to AI

James Manyika, chairman and director of the McKinsey Global Institute, said the United States and China lead the world in AI investment. Last year, AI investment in North America ranged from $15 billion to $23 billion; in Asia (mainly China), it was $8 billion to $12 billion; Europe lagged behind at just $3 billion to $4 billion. Tech giants are the biggest investors in the AI space, accounting for $20 billion to $30 billion, with another $6 billion to $9 billion coming from investors such as venture capitalists and private equity firms.

Where is the money going? Machine learning accounted for 56% of total investment, followed by computer vision at 28%. Natural language took 7%, driverless cars 6%, and virtual assistants the remainder. Despite the growing investment, Manyika said, practical adoption of AI remains limited, even among companies that understand its capabilities: about 40% of companies are still considering deploying AI, 40% have experimented with it, and only 20% have adopted it in some area.

Why are they holding back? Of the respondents, 41% said they were unsure of the return on investment, 30% said the business value was not yet sufficient, and others said they lacked AI skills. McKinsey, however, believes AI can more than double the impact of other analytics and could significantly improve corporate performance.

Some industries are further along than others. The leaders in AI adoption include telecom and tech companies, financial institutions, and carmakers. Manyika said these early adopters tend to be larger, digitally mature companies that build AI into their core activities, focus on growth and innovation rather than cost savings, and have the CEO's backing. The slowest adopters are companies in healthcare, tourism, professional services, education, and construction. Still, experts say that as AI spreads, large-scale adoption by companies is only a matter of time.

(Source: Knowledge@Wharton. Compiled by the NetEase Seeker compilation robot. Reviewed by: Little)

