Sara is a Developer Advocate on Google's Cloud Platform team, where she helps with developer relations through online content, outreach, and events. She has a bachelor's degree in Business and International Studies from Brandeis University. When she's not programming, she can be found running, listening to country music, or finding the best ice cream in SF.
Ever wondered about the technology behind Google Photos? Or wanted to build an app that performs complex image analysis, like detecting objects, faces, emotions, and landmarks? The new Google Cloud Vision API exposes the machine learning models that power Google Photos and Google Image Search. Developers can now access these features with just a simple REST API call. We'll learn how to make a request to the Vision API to classify images, extract text, and even identify landmarks like Harry Potter World. Then we'll live-code an iOS app that implements image detection with the Vision API.
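To give a flavor of how simple that REST call is, here is a minimal sketch of building the JSON body for a single `images:annotate` request. The endpoint and feature type names follow the Vision API's public v1 REST surface; the API key and image bytes are placeholders you would supply yourself.

```python
import base64
import json

API_KEY = "YOUR_API_KEY"  # placeholder; create one in the Google Cloud console
ENDPOINT = "https://vision.googleapis.com/v1/images:annotate?key=" + API_KEY


def build_request(image_bytes):
    """Return the JSON body for a single-image annotate call asking for
    labels, text, and landmarks -- the features described in the talk."""
    return {
        "requests": [{
            # The image is sent inline as base64-encoded bytes.
            "image": {"content": base64.b64encode(image_bytes).decode("utf-8")},
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": 5},
                {"type": "TEXT_DETECTION"},
                {"type": "LANDMARK_DETECTION"},
            ],
        }]
    }


# Placeholder bytes stand in for a real image file read from disk.
body = build_request(b"placeholder-image-bytes")
print(json.dumps(body, indent=2)[:80])
```

POSTing that body to the endpoint (with any HTTP client) returns label, text, and landmark annotations for the image.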
In 2004 Google published the MapReduce paper, a programming model that kick-started big data as we know it. Ten years later, Google introduced Dataflow, a new paradigm that integrates batch and stream processing in one common abstraction. This time the offering was more than a paper: it also included an open-source Java SDK and a managed cloud service to run it. In 2016 big data players like Cask, Cloudera, Data Artisans, PayPal, Slack, and Talend joined Google to propose Dataflow for incubation at the Apache Software Foundation. Dataflow is here, unifying not only batch and streaming but also the big data world.
In this talk we are going to review Dataflow's differentiating elements and why they matter. We’ll demonstrate Dataflow’s capabilities through a real-time demo with practical insights on how to manage and visualize streams of data.
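To make the unified batch/stream model concrete before the demo, here is a tiny plain-Python illustration (not the Dataflow SDK; all names are our own) of the idea at Dataflow's core: assigning timestamped events to fixed windows and aggregating per key per window. The same logic applies whether the events arrive as a bounded batch or one at a time from an unbounded stream.

```python
from collections import defaultdict


def fixed_window(timestamp, size):
    """Map an event timestamp to the start of its fixed window."""
    return timestamp - (timestamp % size)


def windowed_counts(events, window_size):
    """Count (timestamp, key) events per key per fixed window.

    `events` can be a finite list (batch) or any iterator of events
    consumed as they arrive (stream) -- the aggregation is the same."""
    counts = defaultdict(int)
    for ts, key in events:
        counts[(fixed_window(ts, window_size), key)] += 1
    return dict(counts)


events = [(1, "click"), (3, "click"), (7, "view"), (12, "click")]
print(windowed_counts(events, 10))
# -> {(0, 'click'): 2, (0, 'view'): 1, (10, 'click'): 1}
```

In the real Dataflow model the abstraction goes further (event-time vs. processing-time, watermarks, triggers), but window-keyed aggregation like this is the common ground between the batch and streaming worlds.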