Adjunct Professor, Stanford University
Edward Y. Chang has been an adjunct professor of Computer Science at Stanford University since 2019. His current research interests are consciousness modeling, meta-learning, and healthcare. Chang received his MS in CS and PhD in EE, both from Stanford University. He joined the ECE department of UC Santa Barbara in September 1999, was tenured in 2003, and was promoted to full professor in 2006. From 2006 to 2012, Chang served at Google as a director of research, leading research and development in areas including scalable machine learning, indoor localization, Google QA, and recommendation systems. In subsequent years, Chang served as the president of HTC Healthcare (2012-2021) and as a visiting professor at the UC Berkeley AR/VR center (2017-2021), working on healthcare projects including VR surgery planning, AI-powered medical IoTs, and disease diagnosis. Chang is an ACM Fellow and an IEEE Fellow for his contributions to scalable machine learning and healthcare.
Towards Artificial General Intelligence via Consciousness Modeling
This talk first presents the limitations of some widely used AI algorithms. To overcome these limitations and to strive towards artificial general intelligence (AGI), we investigate modeling consciousness on top of the current AI stack. Yoshua Bengio proposed developing a “system two” on top of the current AI stack, which he calls “system one.” Citing Daniel Kahneman’s idea, Bengio maps system one to Kahneman’s thinking fast---thinking unconsciously, intuitively, and habitually---and system two to thinking slow---thinking consciously and logically. To lay out the foundational infrastructure for modeling system two, or consciousness, we first present “what is consciousness,” followed by “where is consciousness,” based on scientific findings in physics, biology, and psychiatry. According to Erwin Schrödinger’s 1944 Dublin lecture series, the transition between consciousness and unconsciousness can be explained in classical physics and quantum mechanics, and can therefore serve as a basis for modeling. Recent advances in fine-grained neuron control using optogenetics open tremendous opportunities to understand how various parts of the human brain work to learn, think, and plan. This talk will survey significant work in modeling consciousness, free will, and ethics, and discuss the foundational hardware/software infrastructure required to make progress towards AGI.
Research Scientist, HPE
Eitan Frachtenberg is a researcher at HPE Labs. He previously held positions as a visiting associate professor of computer science at Reed College in Portland, Oregon, and as a researcher at Facebook, Microsoft, and Los Alamos National Laboratory. His research interests include all aspects of computer systems, optimization algorithms, and data science. He holds a PhD in Computer Science from the Hebrew University of Jerusalem, Israel.
IT for Sustainability—Sustainable IT
Sustainability has always been an important topic for business and, more broadly, for humanity. With growing global recognition of the need to curb carbon emissions, it has become a strategic priority for many CEOs and boardrooms. Companies are making pledges to become carbon neutral by 2050, 2040, and even as early as 2030. The situation has become critical with climate change beginning to impact an increasingly broad global population. Many corporations and governments around the world are trying to understand and quantify their share of carbon emissions, how to reduce that share to net zero, and perhaps even reverse emissions to achieve a net positive impact by removing more carbon than they emit. IT can play a crucial role in all of these aspects while also improving its own carbon footprint. In this talk we address both topics using examples from Hewlett Packard Enterprise. We dissect HPE’s carbon footprint by describing what we report and breaking it down into contributions from our upstream and downstream operations. We present how HPE’s systems and data centers practice sustainability. For sustainable IT, we showcase Frontier, the first exascale computer, and describe how it reached the top not only of the Top500 but also of the Green500 list. For IT for sustainability, we discuss how we use reinforcement learning to make clean energy more efficient.
Director, Data Insights, SAP
Harvind has held leadership positions at many small and large companies, creating disproportionate impact through data. At Apple, he streamlined $50B in forex hedging and built the sales model behind the successful Apple Watch 2. At Upwork, he delivered investor insights for a successful IPO and built the cloud data platform from scratch. Now at SAP, he is changing the way we think about analytics and ML to influence product roadmaps. He holds an engineering degree from India and an MBA from Santa Clara University.
Agile and Flexible Data Infrastructure for Faster Business Impact
SAP lines of business (LOBs) relied on self-managed data infrastructure, with the typical challenges of scalability, reliability, and limited analyst/ML capacity. Over the last three years, we created an agile, elastic, flexible infrastructure that not only creates quick impact but also allows collaboration across LOBs. This talk presents the final vision we are working towards, with the following objectives in mind: 1) Faster time to market: from data sourcing to decision making through the tech/tool stack and processes; 2) Agile insights: moving away from waterfall to partnership/iterative models; 3) Scaling the capacity of data organizations with the right tools; 4) Interconnection across different lines of business: SAP customers don’t run business functions in isolation, so the data has to allow those interconnected insights and learnings.
Senior Staff Engineer, Airbnb
Anna is a senior staff engineer at Airbnb, where she focuses on modernization of internal systems, including booking orchestration and messaging. Prior to Airbnb, she developed machine learning models for automated diagnosis and lung cancer detection from medical imaging at Google Brain and Google Healthcare. Anna was also an early engineer at Pinterest and a ranking engineer at Google Image Search. She holds a PhD in Computer Science from UCLA.
Large-Scale System Migrations - What We Got Right and What We Got Wrong
Large-scale system migrations are complex, multi-year ventures. While frequently disruptive, they are often necessary to align with a company’s new strategy, improve a system’s scalability and security, achieve regulatory compliance, or reduce cost. This presentation will showcase a wide range of migration projects undertaken by Airbnb and other companies. We will identify what went well, along with pitfalls and learnings for the future. While it is difficult to generalize across domains and architectures, there are common challenges and takeaways. One of the key difficulties is executing migration milestones without disturbing production traffic. This sometimes requires several weeks of validation while both the old and new systems are live in production. Common learnings include the importance of setting realistic expectations, careful planning of migration milestones, a process for finalizing the long tail, and proper testing and monitoring.
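One common way to validate milestones while both systems are live is shadow traffic: each request is served by the legacy system while the same request is replayed against the new system and the two responses are compared. The sketch below is a hypothetical illustration of that pattern, not Airbnb’s actual implementation; the function and parameter names (`shadow_compare`, `normalize`) are assumptions for the example.

```python
import logging

logger = logging.getLogger("migration.shadow")

def shadow_compare(request, old_system, new_system, normalize=lambda r: r):
    """Serve production traffic from the legacy system while comparing
    against the new one.

    `old_system` and `new_system` are callables standing in for the two
    live services; `normalize` strips fields expected to differ between
    them (timestamps, request IDs) before comparison.
    """
    old_response = old_system(request)
    try:
        new_response = new_system(request)
        if normalize(old_response) != normalize(new_response):
            # Mismatches are logged for offline analysis, never surfaced.
            logger.warning("mismatch for request %r: old=%r new=%r",
                           request, old_response, new_response)
    except Exception:
        # A failure in the new system must never affect production traffic.
        logger.exception("new system failed for request %r", request)
    # The caller always receives the legacy system's response.
    return old_response
```

Once the mismatch rate stays near zero for long enough, reads can be flipped to the new system with much higher confidence.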
Software Engineer / Tech Lead, Pinterest
Ambud Sharma is a tech lead at Pinterest, where he has worked on architecting, stabilizing, and scaling data systems. Prior to Pinterest, he built several petabyte-scale distributed systems at multiple Fortune 500 companies, and he has over 10 years of experience developing distributed systems.
Scaling Data Transportation and Ingestion with MemQ
Scalable, cost-sustainable data ingestion and transportation is a key challenge in the data engineering space, affecting ML training, batch analytics, and real-time analytics. The current set of open-source technologies has been limited in providing truly cloud-native scaling for data ingestion/transportation and PubSub. At Pinterest, we have developed a new PubSub system called MemQ that leverages pluggable storage, such as cloud-native object stores, to provide linear, unrestricted scaling for data ingestion. MemQ currently powers all ML training ingestion at Pinterest, with up to 90% savings over Kafka. In this talk we share this design pattern of using disaggregated cloud-native storage to build scalable systems.
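The core of the pattern can be illustrated in a few lines: producers batch messages into large objects written to a disaggregated object store, and the queue itself carries only small pointers to those objects, so storage capacity and broker throughput scale independently. This toy sketch is our own simplification, not MemQ’s actual code; the class and method names are invented for illustration, and an in-memory dict stands in for the cloud object store.

```python
import uuid

class ObjectStoreQueue:
    """Toy sketch of a disaggregated-storage PubSub: message payloads
    live in an object store, and the queue holds only object keys."""

    def __init__(self, batch_size=3):
        self.object_store = {}        # stand-in for a cloud object store
        self.notification_queue = []  # carries small pointers, not payloads
        self.batch = []
        self.batch_size = batch_size

    def publish(self, message):
        """Buffer messages and flush a batch once it is large enough."""
        self.batch.append(message)
        if len(self.batch) >= self.batch_size:
            self.flush()

    def flush(self):
        """Write the buffered batch as one large object and enqueue its key."""
        if not self.batch:
            return
        key = f"batch-{uuid.uuid4()}"
        self.object_store[key] = list(self.batch)  # one large sequential write
        self.notification_queue.append(key)        # consumers see tiny pointers
        self.batch.clear()

    def consume(self):
        """Yield messages by dereferencing pointers against the object store."""
        while self.notification_queue:
            key = self.notification_queue.pop(0)
            yield from self.object_store[key]
```

Because brokers never touch payload bytes, adding ingestion capacity is mostly a matter of the object store’s (effectively elastic) throughput, which is the property the talk refers to as linear, unrestricted scaling.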
Chandra Krintz is a Professor of Computer Science (CS) at UC Santa Barbara and Chief Scientist at AppScale Systems Inc. Chandra holds M.S./Ph.D. degrees in CS from UC San Diego. Chandra's research interests include programming systems, cloud and big data computing, and the Internet of Things (IoT). Chandra has supervised and mentored over 70 students, has published her work in a wide range of top venues, is the recipient of multiple teaching and research awards, and has led several educational and outreach programs that introduce computer science to young people.
From Cloud to Farm: Open-Source Edge Infrastructures for Precision Agriculture
In this presentation, we give an overview of UCSB SmartFarm, an open-source computing infrastructure that combines data from a variety of sensors; integrates recent advances in data analytics, machine learning, and user interfaces (compatible with those available from public clouds); and implements support for automatic self-management and fault resilience, precluding the need for an IT staff to maintain the system. SmartFarm combines these technologies to provide on-farm decision support for growers and to automate and inform precision agriculture solutions and farm operations.
Director, Networking and Distributed Systems Lab, HPE
Puneet Sharma is Director of the Networking and Distributed Systems Lab and a Distinguished Technologist at Hewlett Packard Labs, where he leads research on Edge2Cloud infrastructure for 5G, IoT, and AI. Prior to joining HP Labs, he received a Ph.D. in Computer Science from the University of Southern California and a B.Tech. in Computer Science & Engineering from the Indian Institute of Technology, Delhi. Puneet has delivered keynotes at various forums such as the IEEE 5G Startup Summit, NFV World Congress 2016, and IEEE LANMAN 2014. Puneet has also contributed to various Internet standardization efforts, such as co-authoring UPnP’s QoS Working Group’s QoSv3 standard and the IETF RFCs on the multicast routing protocol PIM. He has published over 100 research articles in prestigious networking conferences and journals (Transactions on Networking, ACM SIGCOMM, ACM HotNets, USENIX NSDI, IEEE INFOCOM, etc.). His work on Mobile Collaborative Communities was featured in New Scientist magazine. He has been granted 50+ US patents. Puneet was named a Fellow of the IEEE in 2014 for contributions to the design of scalable networking, software-defined networks, and energy efficiency in data centers. He was also recognized as a Distinguished Member of the ACM for contributions to computing research. Puneet was listed in 2020’s AI 2000 Most Influential Scholars list for the last decade (2009-2019).
Complexities and Challenges of Operating an as-a-Service Edge-to-Cloud Platform
Consumers want a unified view of their applications, data, and services everywhere, whether in on-premises data centers, colocation facilities, or public cloud offerings. In this talk, I will share our experience deploying cloud-managed on-premises infrastructure and managing customers’ hybrid/multi-cloud estates. We will discuss the complexities and challenges of scaling such a managed edge-to-cloud platform, particularly heterogeneity, connectivity, data management, and application diversity across private-public network boundaries.
Software Engineer, Meta Platforms, Inc.
Sergey Smirnov is a software engineer at Meta. He works on the Real-Time Machine Learning team, which aims to evolve Meta's infrastructure systems through the use of machine learning. Prior to Meta, Sergey worked on Google Search and Apple Siri. He has been in the industry for more than 20 years and has vast experience in infrastructure and distributed systems.
Chief Architect and GM for HPC and AI Cloud Services, HPE.
Now that we have delivered Exascale capability to the world, what's next? In this talk, Nic will cover the challenges that came with getting ORNL's Frontier system to perform at 1.102 Exaflops and how this will accelerate the research on some of the world's most pressing issues. He will then propose a view to the future where supercomputers are tied to instruments at the edge, and to users all over the cloud.