Amazon Web Services wants to bring machine learning to the enterprise and start-up masses, releasing a fully managed end-to-end machine learning service called Sagemaker and a video camera that runs deep learning models dubbed DeepLens.
"Machine learning is so tantalising for most everyday developers and scientists. The hope and the hype here is tremendous. And you could argue with all the buzzwords we've heard in the 11 years we’ve been doing AWS, machine learning might be the loudest, and it’s absolutely the buzzword du jour today," said AWS CEO Andy Jassy at the company's re:Invent conference in Las Vegas.
“Builders don’t want machine learning to be so difficult. They don’t want it to be so cryptic. They don’t want it to be a black box. They want it to be much easier to engage with,” he added at the high-production keynote, complete with house band and tenuous musical segues.
Standing up a machine learning model in SageMaker begins with setting up a Jupyter notebook for exploring, cleaning, and preprocessing data. These notebooks can run on general-purpose instance types or, if required, GPU-powered instances.
Users can then utilise any of ten common supervised and unsupervised learning algorithms and frameworks built into the service, or bring their own. Training can scale across tens of instances to support faster model building.
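Among the kinds of built-in algorithms on offer are unsupervised methods such as k-means clustering. As a rough illustration of what such an algorithm does under the hood, here is a toy pure-Python sketch of k-means; this is a local stand-in for the idea, not the SageMaker API or its optimised, distributed implementation:

```python
def kmeans(points, k, iters=20):
    """Plain k-means: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its points."""
    centroids = list(points[:k])  # simple deterministic initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x, y in points:
            nearest = min(range(k),
                          key=lambda i: (x - centroids[i][0]) ** 2
                                      + (y - centroids[i][1]) ** 2)
            clusters[nearest].append((x, y))
        for i, members in enumerate(clusters):
            if members:  # leave a centroid in place if it won no points
                centroids[i] = (sum(x for x, _ in members) / len(members),
                                sum(y for _, y in members) / len(members))
    return centroids

# Two well-separated blobs of 2-D points; k-means should land one
# centroid near the middle of each blob.
data = [(0.1, 0.2), (5.0, 5.1), (0.0, 0.0),
        (0.2, 0.1), (5.2, 4.9), (4.9, 5.0)]
print(sorted(kmeans(data, k=2)))
```

In SageMaker the same clustering job would be configured and launched through the service rather than coded by hand, with the training itself distributed across managed instances.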
By removing some of the big hurdles of building machine learning models, Jassy said the techniques will be within reach of businesses without the need to employ specialists.
“There just aren’t that many machine learning expert practitioners in the world. Most end up living at the big technology companies. And if you want to enable most enterprises and companies to be able to use machine learning in an expansive way, we have to solve the problem of making it accessible for everyday developers and scientists,” he explained.
Here’s looking at you
Jassy also launched a $245 high-definition camera – DeepLens – which comes loaded with a set of pre-trained machine learning models to give developers ‘hands-on experience’ with image detection and recognition.
Developers can also train their own models with SageMaker and run them on the camera.
“These models will help you detect cats and dogs, faces, a wide array of household and everyday objects, motions and actions, and even hot dogs. We will continue to train these models, making them better and better over time,” said AWS chief evangelist Jeff Barr in a blog post.
The four-megapixel camera can capture 1080p video, with sound captured through a 2D microphone array.
DeepLens runs Ubuntu 16.04 and is preloaded with AWS’ Greengrass Core. There’s also a device-optimised version of MXNet, and the flexibility to use other frameworks such as TensorFlow and Caffe2.