
Introducing Project Kinect for Azure

https://www.linkedin.com/pulse/introducing-project-kinect-azure-alex-kipman/

Hello everyone!

Microsoft Build is upon us once again. It’s my favorite time of year because it’s so exciting to introduce our developer community to the newest tools that will empower them to accelerate the world’s digital transformation and create the future.

During Satya Nadella’s Build keynote, he introduced the world to one such tool that may sound a little familiar: Project Kinect for Azure. I wanted to take a little more time to expand upon this project, what it means and the role it will play in enabling developers to apply AI over the real world in profound new ways.

What Satya described is a key advance in the evolution of the intelligent edge: the ability for devices to perceive the people, places and things around them. One of the things that makes Project Kinect for Azure unique and compelling is the combination of our category-defining depth sensor with our Azure AI services that, together, will enable developers to make the intelligent edge more perceptive than ever before.

The technical breakthroughs in our time-of-flight (ToF) depth sensor mean that intelligent edge devices can achieve greater precision with less power consumption. There are additional benefits to combining depth-sensor data with AI: deep learning on depth images can reach the same quality of outcome with dramatically smaller networks, which makes the resulting AI algorithms much cheaper to deploy and the edge more intelligent.
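To make that idea concrete, here is a minimal sketch (in PyTorch, which this post does not itself mention) of the kind of compact model a single-channel depth frame can feed; the layer sizes and the ten-class output are illustrative assumptions, not the networks we actually use.

```python
# Minimal sketch, not a Microsoft model: a deliberately small classifier over a
# single-channel depth frame. Because depth strips away texture and lighting
# variation, compact networks like this can often reach the quality that a much
# larger RGB network would need; layer sizes and class count are illustrative.
import torch
import torch.nn as nn

depth_classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 1-channel depth input
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),                                       # e.g. 10 gesture classes
)

frame = torch.rand(1, 1, 1024, 1024)   # one simulated megapixel depth frame, normalized
logits = depth_classifier(frame)
params = sum(p.numel() for p in depth_classifier.parameters())
print(f"output shape: {tuple(logits.shape)}, parameters: {params}")
```

The toy only shows the shape of a depth-input model; the real savings come from depth removing appearance variation, so far fewer layers and filters are needed for the same task.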

Earlier this year, Cyrus Bamji, an architect on our team, presented a well-received paper at the International Solid-State Circuits Conference (ISSCC) on our latest depth sensor. This is the sensor that Satya described onstage at the Build conference, and it is also the sensor that will give the next version of HoloLens new capabilities. The technical characteristics that make this new depth sensor best-in-class include:

  • Highest number of pixels (megapixel resolution, 1024 x 1024)
  • Highest Figure of Merit (highest modulation frequency and modulation contrast, resulting in low power consumption with an overall system power of 225-950 mW)
  • Automatic per-pixel gain selection, enabling a large dynamic range that allows near and far objects to be captured cleanly
  • Global shutter, allowing for improved performance in sunlight
  • Multiphase depth calculation method, enabling robust accuracy even in the presence of chip, laser and power-supply variation (see the sketch after this list)
  • Low peak-current operation even at high frequency, lowering the cost of modules
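The depth calculation itself rests on the standard continuous-wave time-of-flight relationship between phase shift and distance. The sketch below illustrates only that general principle, not the sensor's actual multiphase pipeline, and the 200 MHz modulation frequency is an illustrative assumption rather than a published spec.

```python
# General continuous-wave ToF relationship (not the sensor's multiphase pipeline):
# depth follows from the phase shift of the reflected, amplitude-modulated light.
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(phase_rad: float, mod_freq_hz: float) -> float:
    """Depth implied by a measured phase shift at one modulation frequency:
    d = c * phi / (4 * pi * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz: float) -> float:
    """Phase wraps every 2*pi, so a single frequency only resolves depth out to
    c / (2 * f_mod); combining several frequencies ('multiphase') extends this."""
    return C / (2.0 * mod_freq_hz)

f_mod = 200e6  # 200 MHz, an illustrative value
print(f"unambiguous range at {f_mod / 1e6:.0f} MHz: {unambiguous_range(f_mod):.3f} m")
print(f"depth at a phase shift of pi/2: {depth_from_phase(math.pi / 2, f_mod):.3f} m")
```

Higher modulation frequency improves depth precision for a given amount of phase-measurement noise, which is why high modulation frequency and contrast translate into a strong figure of merit at low power.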

The Kinect brand has a storied history, from gaming peripheral and developer technology to the depth-sensing magic inside Microsoft HoloLens, the world’s first fully self-contained holographic computer. HoloLens today features depth-camera technology evolved from Kinect hardware, which, in conjunction with other cutting-edge technology, is already transforming businesses as we embrace the era of mixed reality.

Our vision when we created the original Kinect for Xbox 360 was to produce a device capable of recognizing and understanding people so that computers could learn to operate on human terms. Creative developers realized that the technology in Kinect (including the depth-sensing camera) could be used for things far beyond gaming. With the second generation of Kinect we improved the gaming peripheral and also gave developers a version that could connect to a PC: Kinect for Windows. The outcome was great innovation and creativity from our developer community. We discontinued production of second-generation Kinects last year, but we worked with Intel to ensure Windows developers can continue building PC solutions with Intel’s RealSense depth cameras.

With HoloLens, we saw incredible results when we took some of the magic of Kinect and applied it in a mixed reality context. The current version of HoloLens uses the third generation of Kinect depth-sensing technology to enable it to place holograms in the real world. With HoloLens we have a device that understands people and environments, takes input in the form of gaze, gestures and voice, and provides output in the form of 3D holograms and immersive spatial sound. With Project Kinect for Azure, the fourth generation of Kinect now integrates with our intelligent cloud and intelligent edge platform, extending that same innovation opportunity to our developer community.

Project Kinect for Azure unlocks countless new opportunities to take advantage of Machine Learning, Cognitive Services and IoT Edge. We envision that Project Kinect for Azure will result in new AI solutions from Microsoft and our ecosystem of partners, built on the growing range of sensors integrating with Azure AI services. I cannot wait to see how developers leverage it to create practical, intelligent and fun solutions that were not previously possible across a raft of industries and scenarios.
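As one hedged illustration of that pattern (my own example, not an announced SDK for Project Kinect for Azure), a device could run AI over depth frames locally and forward only compact results to the cloud through the existing azure-iot-device Python package; the connection string, the people_count payload and the count_people_in_depth_frame helper are all placeholders.

```python
# Hypothetical edge-to-cloud sketch: depth inference runs on the device and only a
# small JSON result is sent to Azure IoT Hub. Uses the public azure-iot-device
# package; CONNECTION_STRING, the payload fields and count_people_in_depth_frame
# are placeholders standing in for whatever model is actually deployed.
import json
import time
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<id>;SharedAccessKey=<key>"

def count_people_in_depth_frame(frame) -> int:
    """Placeholder for on-device AI over a depth frame."""
    return 0

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()
try:
    result = {"people_count": count_people_in_depth_frame(frame=None),
              "timestamp": time.time()}
    client.send_message(Message(json.dumps(result)))
finally:
    client.disconnect()
```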

To learn more, please visit https://developer.microsoft.com/perception

I’m thrilled to continue the Kinect journey with all of you through Project Kinect for Azure, and we look forward to sharing much more with you over the coming months. As always, feel free to reach out to me on Twitter in the meantime.

Enjoy the rest of Microsoft Build 2018!

Alex