AWS re:Invent 2024 - CEO Keynote with Matt Garman

PLEASE WELCOME THE CEO OF AWS MATT GARMAN. [MUSIC] HELLO EVERYONE, AND WELCOME TO THE 13TH ANNUAL AWS RE:INVENT. SO AWESOME TO SEE YOU ALL HERE. NOW THIS IS MY FIRST EVENT AS CEO, BUT IT'S NOT MY FIRST RE:INVENT. I'VE ACTUALLY HAD THE PRIVILEGE OF BEING AT

EVERY RE:INVENT SINCE 2012. NOW, 13 YEARS INTO THIS EVENT, A LOT HAS CHANGED. BUT WHAT HASN'T IS WHAT MAKES RE:INVENT SO SPECIAL. BRINGING TOGETHER THE PASSIONATE, ENERGETIC AWS COMMUNITY TO LEARN FROM EACH OTHER. JUST HEARING YOU THIS MORNING AND AS YOU WALK THROUGH THE HALLS, YOU CAN FEEL THE ENERGY AND I ENCOURAGE YOU ALL TO TAKE ADVANTAGE OF THIS WEEK TO LEARN FROM EACH OTHER. THIS YEAR WE HAVE ALMOST 60,000

PEOPLE HERE IN PERSON AND ANOTHER 400,000 WATCHING ONLINE. THANK YOU TO EVERYONE WHO'S WATCHING AND EVERYONE WE HAVE IN PERSON HERE. WE HAVE 1,900 SESSIONS FOR YOU ALL TO ATTEND AND ALMOST 3,500 SPEAKERS, AND MANY OF THOSE SPEAKERS AND SESSIONS ARE LED BY CUSTOMERS, PARTNERS AND AWS EXPERTS, MANY OF WHOM ARE IN THE AUDIENCE TODAY. THANK YOU SO MUCH. YOUR SHARING OF YOUR CONTENT AND YOUR EXPERTISE IS WHAT MAKES RE:INVENT SO SPECIAL. THANK YOU. [APPLAUSE]

NOW, RE:INVENT HAS SOMETHING FOR EVERYBODY. IT HAS STUFF FOR TECHNOLOGISTS, FOR EXECUTIVES, FOR PARTNERS, STUDENTS AND MORE. BUT AT ITS CORE, RE:INVENT IS A LEARNING CONFERENCE REALLY DEDICATED TO BUILDERS, AND SPECIFICALLY TO DEVELOPERS. IN FACT, ONE OF THE FIRST THINGS THAT I DID WHEN I TOOK OVER THIS ROLE WAS TO SPEND A LITTLE BIT OF TIME WITH OUR AWS HEROES. [APPLAUSE] A SHOUT OUT TO THE HEROES, WHO I BELIEVE ARE SITTING OVER HERE. I CAN HEAR THEM ALREADY. OUR

HEROES ARE SOME OF OUR MOST DEDICATED AND PASSIONATE AWS DEVELOPERS. THANK YOU SO MUCH. AND THE WAY THE WHOLE AWS DEVELOPER COMMUNITY HAS GROWN IS REALLY INCREDIBLE. WE NOW HAVE 600 USER GROUPS ALL AROUND THE WORLD, SPANNING 120 DIFFERENT COUNTRIES, AND THE ENTIRE AWS COMMUNITY IS MANY MILLIONS ALL AROUND THE WORLD. AND ONE OF THE GREAT THINGS ABOUT THAT

COMMUNITY IS WE GET FEEDBACK FROM YOU THAT GOES DIRECTLY INTO THE PRODUCTS AND INFORMS WHAT WE BUILD AND ANNOUNCE HERE TODAY. SO THANK YOU. NOW, 2006 IS WHEN I FIRST STARTED AT AWS, WHEN WE LAUNCHED THE BUSINESS, AND OUR VERY FIRST CUSTOMERS WERE STARTUPS. SO STARTUPS HAVE A REALLY SPECIAL PLACE IN MY HEART, AND ONE OF THE THINGS THAT I LOVE ABOUT STARTUPS IS THEY ARE ANXIOUS TO USE NEW TECHNOLOGIES. THEY WILL JUMP IN AND THEY GIVE US GREAT FEEDBACK. THEY PUSH US TO

INNOVATE, THEY INNOVATE ON TOP OF US, AND THEY MOVE REALLY FAST. AND WE LEARN A TON FROM STARTUPS. NOW WITH GENERATIVE AI, THERE HAS NEVER BEEN A MORE EXCITING TIME OUT THERE IN THE WORLD TO BE A STARTUP. GENERATIVE AI HAS THE POTENTIAL TO DISRUPT EVERY SINGLE INDUSTRY OUT THERE, AND WHEN YOU LOOK AT DISRUPTORS, DISRUPTION COMES FROM STARTUPS. AND SO IT'S A FANTASTIC TIME, IF YOU'RE A STARTUP, TO REALLY BE THINKING ABOUT HOW YOU DISRUPT INDUSTRIES. IN FACT, WE SUPPORT

STARTUPS SO MUCH, I'M EXCITED TO ANNOUNCE THAT IN 2025, AWS WILL PROVIDE $1 BILLION IN CREDITS TO STARTUPS GLOBALLY, AS WE CONTINUE TO INVEST IN YOUR SUCCESS.

[APPLAUSE] NOW, WHILE STARTUPS WERE OUR FIRST CUSTOMERS, IT ACTUALLY DIDN'T TAKE LONG FOR ENTERPRISES TO CATCH ON. ENTERPRISES QUICKLY REALIZED THERE'S A TON OF VALUE IN THE CLOUD. AND TODAY, THE LARGEST ENTERPRISES ACROSS EVERY SINGLE INDUSTRY IN THE WORLD, EVERY SINGLE VERTICAL, EVERY SINGLE SIZE, EVERY SINGLE GOVERNMENT ARE RUNNING ON AWS. MILLIONS OF CUSTOMERS RUNNING EVERY IMAGINABLE USE CASE. BUT IT WASN'T ACTUALLY ALWAYS THIS WAY. SO ONE STORY FROM EARLY ON I'D LIKE TO SHARE: IT WAS EARLY IN OUR AWS JOURNEY, AND WE TOOK A TRIP TO NEW YORK TO VISIT SOME OF THE BANKS, AND THEY WERE REALLY INTERESTED IN WHAT THIS WHOLE CLOUD COMPUTING THING WAS. AND SO WE SAT DOWN WITH THEM AND

THEY WERE VERY CURIOUS. AND WE OUTLINED OUR VISION FOR HOW CLOUD COMPUTING COULD CHANGE HOW THEY RUN THEIR IT AND THEIR TECHNOLOGY. AND THEY TOLD US, THEY SAID, YOU KNOW WHAT? I THINK IT'S VERY UNLIKELY THAT ANY OF OUR PRODUCTION WORKLOADS ARE EVER GOING TO RUN IN THE CLOUD. AND THEY SAT DOWN AND THEY WERE VERY DILIGENT, AND THEY GAVE US A WHOLE LIST OF REASONS. THERE'S COMPLIANCE, THERE'S AUDIT, THERE'S REGULATORY, THERE'S SECURITY, THERE'S ENCRYPTION. ALL OF THESE THINGS WERE REASONS WHY, AS MUCH AS THEY WOULD LOVE TO, AND AS COMPELLING AS THE TECHNOLOGY WAS, THEY WERE PROBABLY NEVER

GOING TO RUN IN THE CLOUD. SO IT WOULD HAVE BEEN SUPER EASY FOR US TO SAY, OKAY, I GUESS THAT'S NOT WORTH OUR TIME, AND GO TO THE NEXT CUSTOMER. BUT THAT'S NOT WHAT WE DID. AND ACTUALLY, I'D LOVE TO THANK THOSE FINANCIAL SERVICES CUSTOMERS THAT WE SAT DOWN WITH, BECAUSE THEY REALLY HELPED US UNDERSTAND WHAT IT WOULD TAKE TO SUPPORT LARGE, REGULATED CUSTOMERS INSIDE OF THE CLOUD. AND AS AWS, WE REALLY WANTED TO SUPPORT EVERY CUSTOMER. SO WE SPENT THE BETTER PART OF THE NEXT DECADE TICKING EVERY SINGLE THING OFF OF THOSE LISTS. AND I'M PROUD TO SAY THAT

MANY OF THOSE LARGE FINANCIAL COMPANIES ARE CUSTOMERS OF OURS TODAY. AND SO, YOU KNOW, WHEN YOU'RE INNOVATING, ONE OF THE IMPORTANT THINGS TO REMEMBER IS YOU REALLY WANT TO START WITH THE CUSTOMER. YOU WANT TO ASK THEM WHAT'S IMPORTANT TO THEM, BUT THEN YOU DON'T ALWAYS JUST DELIVER WHAT THE CUSTOMER ASKS FOR. YOU WANT TO INVENT ON THEIR BEHALF. WE CALL THIS, AT AMAZON, STARTING WITH THE CUSTOMER AND WORKING BACKWARDS: LISTENING TO THEM, UNDERSTANDING WHAT THEY WANT, AND THEN WORKING BACKWARDS TO WHAT'S A FANTASTIC PRODUCT. AND THAT CUSTOMER FOCUS AND WORKING BACKWARDS IS REALLY PART OF OUR AWS DNA, AND IT'S HOW WE'VE APPROACHED THE BUSINESS FROM THE VERY BEGINNING. IN

FACT, THE ORIGINAL AWS VISION DOCUMENT WAS WRITTEN IN 2003, AND AT THE TIME, THERE WERE A LOT OF TECHNOLOGY COMPANIES BUILDING THESE BUNDLED SOLUTIONS THAT WOULD TRY TO DO EVERYTHING FOR YOU, AND WHAT THEY ENDED UP WITH WERE THESE BIG MONOLITHIC SOLUTIONS THAT WOULD DO EVERYTHING FINE. IT WAS GOOD ENOUGH, AND WE HAD THIS OBSERVATION THAT GOOD ENOUGH SHOULDN'T BE ALL YOU STRIVE FOR. YOU WANT THE BEST POSSIBLE COMPONENTS. IT WOULD BE GREAT IF

YOU COULD TAKE THE LITTLE BIT OF EVERYTHING THAT WAS BEST AND COMBINE THEM TOGETHER. AND WITH THAT IDEA, AWS WAS BORN AND WE MADE THIS OTHER OBSERVATION. YOU COULD TAKE ALMOST ANY APPLICATION OUT THERE AND YOU COULD BREAK IT DOWN INTO INDIVIDUAL COMPONENTS. IT COULD BE DECOMPOSED INTO THESE CORE SERVICES. AND WE CALL THESE CORE SERVICES BUILDING BLOCKS. AND THIS IDEA WAS THAT IF YOU HAD THESE SERVICES THAT WERE THE BEST IN THE WORLD AT DOING THIS ONE SPECIFIC THING, AND THEY DID THAT JOB REALLY, REALLY WELL, AND YOU MADE IT EASY TO COMBINE A BUNCH OF THESE SERVICES TOGETHER IN NEW AND UNIQUE WAYS.

THEN PEOPLE COULD REALLY BUILD INTERESTING THINGS, AND YOU'D HAVE A BETTER MODEL FOR HOW TO CONSUME TECHNOLOGY AND HOW TO BUILD COMPANIES. NOW, THIS BUILDING BLOCK CONCEPT HAS BEEN FUNDAMENTAL TO HOW WE'VE BUILT AWS SERVICES OVER THE LAST 18 YEARS, AND TODAY WE HAVE HUNDREDS OF AWS SERVICES THAT YOU ALL COMBINE IN UNIQUE AND DIFFERENT, INTERESTING AND INCREDIBLE WAYS. SO ONE OF THE THINGS I THOUGHT WOULD BE PRETTY FUN THROUGHOUT THE DAY TODAY IS TO HEAR FROM SOME STARTUP FOUNDERS WHO ARE TAKING THESE BUILDING BLOCKS AND COMBINING THEM IN COOL AND INTERESTING WAYS TO REDEFINE HOW PROBLEMS GET SOLVED IN THEIR SPACE. LET'S HEAR FROM THE FIRST ONE OF THESE FOUNDERS NOW. [MUSIC]

PROTEINS HAVE THE POTENTIAL TO SOLVE SOME OF THE MOST IMPORTANT CHALLENGES FACING OUR SOCIETIES. MANY IMPORTANT DRUGS ARE PROTEINS. EVOLUTIONARY SCALE DEVELOPS AI FOR THE LIFE SCIENCES. WE HAVE DEVELOPED THE ESM FAMILY OF MODELS, WHICH ARE USED BY SCIENTISTS AROUND THE WORLD TO UNDERSTAND AND DESIGN PROTEINS. [MUSIC] AS SOON AS WE FOUNDED THE COMPANY, OUR FIRST COMPUTE WAS ON AWS, AND WE WERE EARLY USERS OF HYPERPOD. THIS ALLOWED US TO DO THE ITERATIVE DEVELOPMENT

THAT WAS CRITICAL FOR CREATING ESM3. ESM3 IS A MODEL THAT REASONS OVER THE SEQUENCE, STRUCTURE, AND FUNCTION OF PROTEINS. IT'S BEEN TRAINED WITH A TRILLION TERAFLOPS OF COMPUTE ON OVER 2 BILLION PROTEIN SEQUENCES. THROUGH THIS TRAINING, IT'S DEVELOPED A VERY DEEP UNDERSTANDING OF THE BIOLOGY OF PROTEINS THAT ALLOWS SCIENTISTS TO THEN GO IN AND PROMPT THE MODEL AND BE ABLE TO CREATE NEW PROTEINS. WHAT THIS MEANS IS

THAT WE COULD ENGINEER PROTEINS THE WAY THAT WE ENGINEER OTHER SYSTEMS, LIKE MICROCHIPS AND BUILDINGS. AND SO ESM3 IS A STEP TOWARD MAKING BIOLOGY PROGRAMMABLE. BY BRINGING TOGETHER THE FRONTIER TOOLS THAT EVOLUTIONARY SCALE IS DEVELOPING WITH THE SECURITY AND ORCHESTRATION THAT AWS BRINGS, WE COULD HAVE A SOLUTION THAT COULD SCALE IN THE PHARMACEUTICAL INDUSTRY. AS WE CONTINUE TO SCALE THESE MODELS UP, THEY'LL DEVELOP A DEEPER AND DEEPER UNDERSTANDING OF BIOLOGY AND HELP US TO UNDERSTAND THE NATURAL WORLD AND LIFE IN A NEW WAY. [APPLAUSE] IT'S ACTUALLY MIND BLOWING AND REALLY INCREDIBLE WHAT THE TEAM AT EVOLUTIONARY SCALE IS BUILDING. REALLY COOL. THANK

YOU. NOW, THERE ARE A LOT OF REASONS WHY CUSTOMERS CHOOSE AWS, BUT THERE IS ONE THING THAT ALMOST EVERYBODY REALLY CARES A LOT ABOUT, FROM STARTUPS TO ENTERPRISES TO GOVERNMENTS. AND WHAT THAT IS, IS SECURITY. NOW, WHEN WE STARTED AWS, WE UNDERSTOOD FROM DAY ONE THAT SECURITY WAS GOING TO HAVE TO BE OUR TOP PRIORITY. WE KNEW THAT YOU ALL WERE TRUSTING US WITH

YOUR DATA. YOU'RE BUILDING YOUR BUSINESS ON TOP OF US, AND THAT'S A RESPONSIBILITY THAT WE TOOK REALLY SERIOUSLY. AND SO FOR US, WE KNEW THAT SECURITY HAD TO BE THE FOUNDATION THAT WE BUILT THE BUSINESS ON. PART OF WHAT WE UNDERSTOOD IS THAT SECURITY IS PART OF YOUR CULTURE, RIGHT? IT'S SOMETHING THAT YOU CAN'T BOLT ON AFTER THE FACT. IT'S NOT SOMETHING THAT YOU CAN GO LAUNCH AND THEN ADD SECURITY. YOU HAVE TO DO IT FROM THE BEGINNING. AND SO FOR US, SECURITY MATTERS IN EVERY SINGLE THING THAT WE DO. IT

MATTERS IN HOW WE DESIGN OUR DATA CENTERS. IT MATTERS IN HOW WE DESIGN OUR SILICON. IT MATTERS IN HOW WE DESIGN OUR VIRTUALIZATION STACK AND OUR SERVICE ARCHITECTURES, AND MAYBE MOST IMPORTANTLY, IN ALL OF OUR SOFTWARE DEVELOPMENT PRACTICES, WHERE SECURITY HAS TO BE FRONT AND CENTER FROM THE VERY BEGINNING, FROM THE DESIGN PHASE TO THE IMPLEMENTATION PHASE TO THE DEPLOYMENT PHASE TO PATCHING, ETC. SUPER IMPORTANT. EVERYTHING STARTS WITH SECURITY TOP OF MIND. AND WITH AWS,

SECURITY IS ONE OF THE REASONS THAT SO MANY CUSTOMERS ARE TRUSTING AWS WITH THEIR CLOUD WORKLOADS. IT'S THIS CORE FOUNDATIONAL LAYER THAT ALL OF THE REST OF OUR SERVICES KIND OF BUILD ON TOP OF. ALL RIGHT, SPEAKING OF ONE OF THOSE BUILDING BLOCKS, LET'S JUMP IN AND TALK ABOUT THE FIRST ONE. COMPUTE. NOW, TODAY, AWS OFFERS MORE COMPUTE INSTANCES THAN ANY OTHER PROVIDER BY A LONG SHOT. AND IT ALL STARTED WITH EC2.

NOW, SOME OF YOU MIGHT KNOW I ACTUALLY USED TO LEAD THE EC2 TEAM FOR MANY YEARS. NOW, IN MY CURRENT ROLE, TECHNICALLY I'M PROBABLY NOT ALLOWED TO SAY I HAVE FAVORITES, BUT I REALLY LOVE EC2. THE GOOD NEWS IS LOTS OF CUSTOMERS LOVE IT TOO. AND WHY? BECAUSE EC2 HAS MORE OPTIONS, MORE INSTANCES, AND MORE CAPABILITIES, AND IT HELPS YOU FIND THE EXACT RIGHT PERFORMANCE FOR THE APP OR WORKLOAD THAT YOU NEED. IN FACT, AT THIS POINT, WE'VE GROWN TO WHERE EC2 HAS 850 DIFFERENT INSTANCE TYPES ACROSS 126 DIFFERENT FAMILIES. WHAT THAT MEANS IS YOU CAN ALWAYS FIND THE EXACT RIGHT INSTANCE TYPE FOR THE WORKLOAD THAT YOU NEED. SO IF YOU'RE RUNNING, SAY, A BIG DATABASE OR AN ANALYTICS WORKLOAD, WE HAVE THE LARGEST STORAGE INSTANCES THAT YOU CAN RUN ANYWHERE IN THE CLOUD. LET'S

SAY YOU HAVE AN APP THAT HAS IN-MEMORY REQUIREMENTS. WE HAVE THE INSTANCES WITH THE LARGEST MEMORY FOOTPRINT ANYWHERE FOR THOSE LATENCY SENSITIVE NEEDS THAT YOU HAVE. NOW, LET'S SAY YOU'RE RUNNING AN HPC CLUSTER OR A LARGE AI AND ML CLUSTER, AND YOU NEED REALLY FAST NETWORKING TO CONNECT ALL OF THOSE NODES TOGETHER. AWS HAS BY FAR THE FASTEST AND MOST SCALABLE NETWORK TO KEEP ALL OF YOUR HPC INSTANCES OR ML INSTANCES CONNECTED, AND WE WORK HARD TO ENSURE THAT YOU ALWAYS HAVE THE LATEST TECHNOLOGY, SO YOU ALWAYS HAVE INSTANCES WITH THE LATEST ADVANCEMENTS FROM INTEL OR AMD OR NVIDIA. AND AWS WAS THE FIRST, AND CONTINUES TO BE THE ONLY, CLOUD THAT OFFERS MAC-BASED INSTANCES. SO ONE QUESTION THAT I GET FROM A LOT OF CUSTOMERS

IS, HOW DO YOU POSSIBLY SUSTAIN THAT LEVEL OF INNOVATION? WHAT ALLOWS YOU TO DELIVER SO MUCH? ONE OF THE ANSWERS TO THAT IS NITRO. NITRO IS OUR AWS VIRTUALIZATION SYSTEM. EFFECTIVELY, WHAT WE DID IS WE DESIGNED A CUSTOM CHIP THAT OFFLOADS ALL OF THE VIRTUALIZATION FOR NETWORKING, FOR STORAGE, AND FOR COMPUTE OFF INTO THIS SEPARATE NITRO CARD SYSTEM YOU SEE UP HERE. WHAT THAT ALLOWS US TO DO IS IT ALLOWS US TO DELIVER BARE METAL PERFORMANCE. IT ALLOWS US TO DELIVER SECURITY AND ISOLATION FOR CUSTOMERS THAT'S UNMATCHED ANYWHERE IN THE CLOUD. BUT AN INTERESTING OTHER

BENEFIT OF NITRO IS IT GIVES US A TON OF FLEXIBILITY. SEE, WHEN WE MOVE ALL OF THE VIRTUALIZATION OFF INTO THIS SEPARATE NITRO SYSTEM, IT ALLOWS US TO INNOVATE INDEPENDENTLY, SO WE DON'T HAVE TO REDO THE VIRTUALIZATION STACK EVERY TIME WE HAVE A NEW INSTANCE TYPE. WE CAN SIMPLY

DEVELOP A GREAT NEW SERVER AND DEVELOP THE VIRTUALIZATION STACK INDEPENDENTLY AND PUT THEM TOGETHER. THAT ALLOWS US TO MOVE MUCH FASTER. AND IT'S INTERESTING. NITRO HAS BEEN ONE OF THE KEYS THAT HAS REALLY UNLOCKED THE SPEED OF INNOVATION IN COMPUTE IN AWS. THE OTHER INTERESTING THING ABOUT NITRO IS IT REALLY TAUGHT US ABOUT DEVELOPING CUSTOM SILICON, AND WITH THE SUCCESS OF NITRO, WE SAW THAT WE COULD BE SUCCESSFUL AT BUILDING SILICON. AND SO WE LOOKED AT WHERE ELSE MAYBE WE

COULD APPLY THIS SKILL. NOW, IN 2018, WE SAW A TREND IN COMPUTE. WE WERE LOOKING OUT THERE AND WE SAW THAT ARM CORES WERE GETTING FASTER. MOST OF THEM WERE IN MOBILE, BUT THEY WERE GETTING MORE POWERFUL. AND WE HAD THIS IDEA THAT THERE'S THIS OPPORTUNITY THAT MAYBE WE COULD GO COMBINE THAT TECHNOLOGY CURVE WITH OUR KNOWLEDGE OF WHAT'S MOST IMPORTANT TO CUSTOMERS RUNNING INSIDE OF AWS AND DEVELOP A CUSTOM, GENERAL PURPOSE PROCESSOR. NOW, AT THE TIME, THIS WAS A VERY

CONTROVERSIAL IDEA, RIGHT? IT SEEMED CRAZY THAT WE WOULD GO DEVELOP OUR OWN SILICON, BUT WE WERE PRETTY CONVINCED THAT WE COULD DELIVER REALLY DIFFERENTIATED VALUE TO CUSTOMERS. AND SO WE DOVE IN AND WE LAUNCHED GRAVITON. FAST FORWARD TO TODAY: GRAVITON IS WIDELY USED BY ALMOST EVERY AWS CUSTOMER OUT THERE. GRAVITON DELIVERS 40% BETTER PRICE

PERFORMANCE THAN X86, AND IT USES 60% LESS ENERGY. THAT IS FANTASTIC. SO YOU CAN BOTH REDUCE YOUR CARBON FOOTPRINT AND GET BETTER PRICE PERFORMANCE. AND GRAVITON IS GROWING LIKE CRAZY. LET'S PUT THIS INTO CONTEXT. IN 2019, ALL OF AWS WAS A $35 BILLION BUSINESS. TODAY, THERE'S AS MUCH GRAVITON RUNNING IN THE AWS FLEET AS ALL COMPUTE IN 2019. IT'S PRETTY IMPRESSIVE

GROWTH. A FEW MONTHS AGO, WE LAUNCHED GRAVITON4. GRAVITON4 IS THE MOST POWERFUL GRAVITON CHIP YET. AND NOW WITH THIS MORE POWERFUL CHIP, GRAVITON CAN ADDRESS A MUCH BROADER SET OF WORKLOADS, INCLUDING SCALE-UP WORKLOADS LIKE DATABASES OR OTHER THINGS THAT NEED REALLY LARGE INSTANCES. GRAVITON4 DELIVERS 30% MORE COMPUTE POWER PER CORE, BUT IT ALSO HAS THREE TIMES THE NUMBER OF VCPUS AS GRAVITON3, AS WELL AS 3X THE MEMORY, SO YOU CAN GET MUCH LARGER INSTANCE SIZES. AND IT'S NOT JUST MORE AND BIGGER INSTANCE SIZES. GRAVITON IS REALLY HAVING A MATERIAL

IMPACT ON CUSTOMERS' BUSINESSES. LET'S TAKE A LOOK AT SOMEBODY LIKE PINTEREST. NOW, PINTEREST WAS RUNNING THOUSANDS OF X86 INSTANCES TO RUN THEIR BUSINESS, AND THEY MADE THE DECISION TO MOVE TO GRAVITON. WHEN THEY MOVED, NOT ONLY DID THEY SEE LOWER PRICING WITH GRAVITON, THEY ACTUALLY GOT BETTER PERFORMANCE FROM EVERY INSTANCE. SO THEY'RE

ACTUALLY ABLE TO REDUCE THEIR FOOTPRINT. THIS MEANS THAT BECAUSE OF GRAVITON, THEY REDUCED THEIR COMPUTE COSTS BY 47%. THERE ARE NOT A LOT OF THINGS OUT THERE THAT YOU CAN DO THAT CAN REDUCE THE COST OF YOUR ENTIRE COMPUTE FOOTPRINT BY 47%. IT'S PRETTY INCREDIBLE. AND BECAUSE PINTEREST CARES A TON ABOUT HOW

ENERGY EFFICIENT THEY ARE, THEY CUT THEIR CARBON EMISSIONS BY 62%. AND THEY'RE NOT ALONE. 90% OF THE TOP 1000 EC2 CUSTOMERS HAVE ALL STARTED USING GRAVITON. IT'S AWESOME TO SEE SO MANY CUSTOMERS BENEFITING FROM THIS INNOVATION. [APPLAUSE] BUT I WILL TELL YOU, TIME DOES NOT STAND STILL. AND IF THERE HAS BEEN ONE CONSTANT IN MY TIME

AT AWS, IT'S THAT WORKLOADS ARE CONSTANTLY EVOLVING, WHICH MEANS CUSTOMERS ARE LOOKING TO US TO SAY, HOW CAN YOU COME UP WITH BETTER SOLUTIONS FOR THESE NEW WORKLOADS? AND TODAY, BY FAR THE BIGGEST COMPUTE PROBLEMS OUT THERE INVOLVE AI, AND SPECIFICALLY GENERATIVE AI. NOW, THE VAST MAJORITY OF GENERATIVE AI WORKLOADS TODAY RUN ON NVIDIA GPUS, AND AWS IS BY FAR THE BEST PLACE ANYWHERE IN THE WORLD TO RUN GPU WORKLOADS. PART OF THE REASON IS BECAUSE AWS AND NVIDIA HAVE BEEN COLLABORATING TOGETHER FOR 14 YEARS TO ENSURE THAT WE'RE REALLY GREAT AT OPERATING AND RUNNING GPU WORKLOADS. JUST THIS LAST YEAR, WE LAUNCHED G6 AND P5EN INSTANCES, ALL POWERED BY NVIDIA

TENSOR CORE GPUS. IN FACT, BECAUSE AWS HAS THE BEST, MOST SCALABLE NETWORK AND IS BY FAR THE MOST OPERATIONALLY EXCELLENT PLACE TO RUN LARGE WORKLOADS, NVIDIA ACTUALLY CHOSE AWS AS THE PLACE TO RUN THEIR OWN LARGE SCALE GENERATIVE AI CLUSTER. AND I'LL TELL YOU TODAY THAT WE'RE DOUBLING DOWN ON THAT PARTNERSHIP. SO TODAY I'M HAPPY TO ANNOUNCE THE P6 FAMILY OF INSTANCES. [APPLAUSE]

P6 INSTANCES WILL FEATURE THE NEW BLACKWELL CHIPS FROM NVIDIA, AND THEY'LL BE COMING EARLY NEXT YEAR. P6 INSTANCES WILL GIVE YOU UP TO 2.5 TIMES FASTER COMPUTE THAN THE CURRENT GENERATION OF GPUS. THAT IS FANTASTIC. WE EXPECT P6 TO BE INCREDIBLY

POPULAR ACROSS A BROAD RANGE OF GENERATIVE AI APPLICATIONS, AND I'M SUPER EXCITED TO SEE WHAT YOU ALL START BUILDING WITH THEM EARLY NEXT YEAR WHEN THEY LAUNCH. NOW AT AWS, HOWEVER, WE ARE NEVER SATISFIED. AND SO WHEN WE BUILT GRAVITON, WE SAW THAT WE COULD BUILD A PROCESSOR THAT WAS ACTUALLY GETTING ADOPTED AND DELIVERING VALUE TO CUSTOMERS. AND SO WE LOOKED AND

WE THOUGHT, WHAT ELSE COULD WE DO? AS WE LOOKED OUT THERE IN THE COMPUTE WORLD, WE SAW ANOTHER TREND EMERGING, WHICH WE JUST TALKED ABOUT: EVEN IN 2018, WE SAW THAT ACCELERATOR INSTANCES AND ML DEEP LEARNING WERE GOING TO BE A BIG TREND. WE MAYBE DIDN'T KNOW HOW BIG, BUT WE KNEW THAT THIS WAS GOING TO BE A BIG DEAL. AND WE SAID MAYBE WE COULD GO IN AND THINK ABOUT BUILDING AI CHIPS AS WELL, BECAUSE WE THOUGHT THIS WAS GOING TO BE A BIG AREA IN THIS SPACE. BUT WE LOOKED AT IT, AND THOSE CHIPS WERE GOING TO BE MUCH MORE COMPLICATED. WE KNEW THAT WAS GOING TO BE HARDER, BUT WE ALSO KNEW THAT WAS AN AREA WHERE WE THOUGHT WE COULD BRING DIFFERENTIATED VALUE, MUCH LIKE WE DID WITH GRAVITON. AND SO WE DECIDED TO JUMP IN AND DO IT. THE FIRST CHIP WE LAUNCHED, IN 2019, WAS INFERENTIA, WHICH WAS A REALLY GOOD FIRST STEP AND ALLOWED US TO REALLY LOWER THE COST FOR SMALL INFERENCE WORKLOADS IN PARTICULAR. ALEXA WAS ONE OF OUR FIRST BIG INFERENTIA WORKLOADS, AND THEY WERE ABLE TO SAVE A TON OF MONEY MOVING TO INFERENTIA. THEN, IN

2022, WE LAUNCHED OUR VERY FIRST TRAINING CHIP, CALLED TRAINIUM ONE. NOW, TRAINIUM HAD SOME REALLY GREAT EARLY ADOPTERS: ENTERPRISES LIKE BANCO, IBM AND RICOH, AND STARTUPS LIKE NINJATECH AND ARCEE AI. BUT WE KNEW THAT THE FIRST TRAINING CHIP WASN'T GOING TO BE PERFECT. WE ALSO KNEW IT WASN'T GOING TO BE A FIT FOR EVERY WORKLOAD. WE KNEW IF A WORKLOAD COULD WORK ON

TRAINIUM ONE, WE THOUGHT THAT CUSTOMERS COULD PROBABLY SAVE ABOUT 50% ON THOSE WORKLOADS, AND MANY OF THOSE EARLY ADOPTERS DID. BUT WE ALSO KNEW THAT IT WAS EARLY. THE SOFTWARE WAS EARLY, AND IT WASN'T GOING TO BE PERFECT FOR EVERY WORKLOAD. BUT WE SAW ENOUGH TRACTION, WE SAW ENOUGH INTEREST FROM CUSTOMERS AND ENOUGH SAVINGS THAT IT GAVE US CONFIDENCE THAT WE WERE ON THE RIGHT PATH. SO LAST YEAR AT RE:INVENT, WE ANNOUNCED THAT WE WERE BUILDING TRAINIUM TWO, AND TODAY I'M EXCITED TO ANNOUNCE THE GA OF TRAINIUM TWO-POWERED TRN2 INSTANCES.
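The "30 to 40% better price performance" figure quoted for these instances a moment later can be unpacked with a little arithmetic. A minimal sketch, assuming price performance means work per dollar (the 1.3x and 1.4x factors are just the keynote's percentages restated):

```python
# "30 to 40% better price performance" means 1.3x to 1.4x more work per dollar.
# For a fixed amount of work, the cost you pay is the reciprocal of that gain.
for gain in (1.3, 1.4):
    cost_ratio = 1 / gain                 # fraction of the old cost you still pay
    savings_pct = (1 - cost_ratio) * 100  # percent saved on the same workload
    print(f"{gain:.1f}x price performance -> ~{savings_pct:.0f}% lower cost")
```

So a 30 to 40% price-performance gain translates to roughly 23 to 29% lower cost for the same amount of work, not 30 to 40% off the bill.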

[APPLAUSE] NOW, TRAINIUM TWO INSTANCES ARE OUR MOST POWERFUL INSTANCES FOR GENERATIVE AI, ALL BECAUSE OF THESE CUSTOM BUILT PROCESSORS COMPLETELY BUILT IN-HOUSE BY AWS. NOW, TRAINIUM TWO DELIVERS 30 TO 40% BETTER PRICE PERFORMANCE THAN CURRENT GPU POWERED INSTANCES. 30 TO 40% BETTER. THAT IS PERFORMANCE THAT YOU CANNOT GET ANYWHERE ELSE. TRN2 INSTANCES HAVE 16 TRAINIUM TWO CHIPS THAT ARE ALL CONNECTED BY A HIGH BANDWIDTH, LOW LATENCY INTERCONNECT WE CALL NEURONLINK, AND ONE TRAINIUM TWO INSTANCE WILL DELIVER 20.8 PETAFLOPS FROM A SINGLE COMPUTE NODE. REALLY COOL. THESE ARE

PURPOSE BUILT FOR THE DEMANDING WORKLOADS OF CUTTING EDGE GENERATIVE AI TRAINING AND INFERENCE. AGAIN, TRAINIUM TWO INSTANCES ARE GOING TO BE FANTASTIC FOR TRAINING AND INFERENCE. NAMING IS NOT ALWAYS PERFECT FOR US. BUT NOW TRAINIUM TWO ADDS A TON OF CHOICE FOR CUSTOMERS, SO THAT THEY NOW HAVE MORE CHOICES AS THEY THINK ABOUT WHAT IS THE PERFECT INSTANCE FOR THE WORKLOAD THAT THEY'RE WORKING ON. AND EC2 HAS BY FAR THE MOST OPPORTUNITIES FOR DIFFERENT CHOICES. WE ACTUALLY WORKED WITH A COUPLE OF EARLY CUSTOMERS TO BETA TEST TRAINIUM TWO TO MAKE SURE THAT WE WERE ON THE RIGHT TRACK, AND WE SAW SOME PRETTY IMPRESSIVE EARLY RESULTS. ADOBE IS SEEING VERY PROMISING EARLY RESULTS FROM TESTING TRAINIUM TWO AGAINST THEIR FIREFLY INFERENCE MODEL, AND THEY EXPECT TO SAVE SIGNIFICANT AMOUNTS OF MONEY. POOLSIDE IS A STARTUP THAT'S BUILDING A NEXT GENERATION SOFTWARE DEVELOPMENT PLATFORM, AND IS PLANNING TO TRAIN ALL FUTURE VERSIONS OF THEIR LARGE FRONTIER MODEL ON TRAINIUM TWO. POOLSIDE EXPECTS TO SAVE

40% OVER THE ALTERNATIVE OPTIONS. DATABRICKS IS ONE OF THE LARGEST DATA AND AI COMPANIES ANYWHERE IN THE WORLD, AND THEY PLAN TO USE TRAINIUM TWO TO DELIVER BETTER RESULTS AND LOWER THE TCO FOR OUR JOINT CUSTOMERS BY UP TO 30%. AND FINALLY, QUALCOMM IS REALLY EXCITED ABOUT USING TRAINIUM TWO TO DELIVER AI SYSTEMS THAT CAN TRAIN IN THE CLOUD AND THEN DEPLOY AT THE EDGE. SO WE'RE VERY EXCITED ABOUT THE EARLY RESULTS OF TRAINIUM TWO, AND VERY EXCITED FOR YOU ALL TO GET YOUR HANDS ON THEM AND TRY THEM. BUT I WILL SAY THAT WE DIDN'T STOP THERE. AS MANY OF YOU KNOW, SOME MODELS TODAY ARE ACTUALLY GETTING REALLY, REALLY LARGE, WITH HUNDREDS OF BILLIONS, SOMETIMES

TRILLIONS OF PARAMETERS. AND THOSE MODELS ARE OFTEN TOO LARGE TO FIT ON A SINGLE SERVER. SO I'M EXCITED TO ANNOUNCE EC2 TRAINIUM TWO ULTRA SERVERS. [APPLAUSE]
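The UltraServer numbers quoted in this keynote are straightforward multiples of the per-instance Trn2 figures (16 chips and 20.8 petaflops per instance, four instances per UltraServer); a quick sketch checks the totals:

```python
# Per-instance Trn2 figures quoted in the keynote.
CHIPS_PER_INSTANCE = 16
PETAFLOPS_PER_INSTANCE = 20.8
INSTANCES_PER_ULTRASERVER = 4  # an UltraServer connects four Trn2 instances

total_chips = CHIPS_PER_INSTANCE * INSTANCES_PER_ULTRASERVER
total_petaflops = PETAFLOPS_PER_INSTANCE * INSTANCES_PER_ULTRASERVER

print(total_chips)                # 64 chips in one UltraServer
print(round(total_petaflops, 1))  # 83.2, i.e. "over 83 petaflops"
```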

EFFECTIVELY, AN ULTRA SERVER CONNECTS FOUR TRAINIUM TWO INSTANCES, SO 64 TRAINIUM TWO CHIPS, ALL INTERCONNECTED BY THAT HIGH SPEED, LOW LATENCY NEURONLINK CONNECTIVITY. WHAT THIS DOES IS IT GIVES YOU A SINGLE ULTRA NODE WITH OVER 83 PETAFLOPS OF COMPUTE FROM A SINGLE COMPUTE NODE. THIS HAS A MASSIVE IMPACT ON LATENCY. SO NOW YOU CAN LOAD ONE OF THESE

REALLY LARGE MODELS ALL INTO A SINGLE NODE AND DELIVER MUCH BETTER LATENCY AND MUCH BETTER PERFORMANCE FOR CUSTOMERS, WITHOUT HAVING TO BREAK IT UP OVER MULTIPLE NODES. IN ADDITION, THIS HAS REALLY MATERIAL IMPACTS ON TRAINING CLUSTERS. BY HAVING THESE REALLY LARGE NODES, WE CAN ACTUALLY BUILD MUCH, MUCH LARGER TRAINING CLUSTERS FOR CUSTOMERS. NOW, ONE OF THE CUSTOMERS THAT WE'RE BUILDING A NEXT GENERATION FRONTIER MODEL TOGETHER WITH IS ANTHROPIC. TOGETHER WITH ANTHROPIC, AWS IS BUILDING WHAT WE CALL PROJECT RAINIER. PROJECT RAINIER

IS GOING TO BE A CLUSTER OF TRAINIUM TWO ULTRA SERVERS CONTAINING HUNDREDS OF THOUSANDS OF TRAINIUM TWO CHIPS. NOW, THIS CLUSTER IS ACTUALLY GOING TO HAVE FIVE TIMES THE NUMBER OF EXAFLOPS AS THE CURRENT CLUSTER THAT ANTHROPIC USED TO TRAIN THEIR LEADING SET OF CLAUDE MODELS THAT ARE OUT THERE IN THE WORLD. FIVE TIMES THE AMOUNT OF COMPUTE THAT THEY USED FOR THE CURRENT GENERATION. I AM SUPER EXCITED TO SEE WHAT THE ANTHROPIC TEAM COMES UP WITH WITH THAT SIZE CLUSTER. [APPLAUSE] IT'S FANTASTIC TO SEE HOW SOME OF THE MOST INNOVATIVE COMPANIES ANYWHERE IN THE WORLD ARE LEANING INTO TRAINIUM TWO FOR THEIR CUTTING EDGE AI EFFORTS. ONE OF THE BIGGEST, MOST INNOVATIVE COMPANIES IN THE WORLD IS APPLE. HERE TO TALK ABOUT HOW APPLE AND AWS ARE WORKING TOGETHER IN A LONG TERM PARTNERSHIP TO ACCELERATE THE TRAINING AND INFERENCE THAT THEY USE TO BUILD UNIQUE FEATURES FOR THEIR CUSTOMERS IS BENOIT DUPIN, SENIOR DIRECTOR OF MACHINE LEARNING AND AI FROM APPLE.

WELCOME, BENOIT. [MUSIC] THANK YOU, MATT. GOOD MORNING, EVERYONE. [MUSIC] I'M HAPPY TO BE BACK TODAY AS A CUSTOMER. I SPENT MANY GREAT YEARS AT AMAZON, WHERE I GOT TO LEAD PRODUCT SEARCH. TEN YEARS AGO I JOINED APPLE, AND I NOW OVERSEE OUR MACHINE LEARNING, AI AND SEARCH INFRASTRUCTURE FOR THE COMPANY. THIS INCLUDES OUR PLATFORM FOR MODEL TRAINING, FOUNDATION MODEL INFERENCE, AND OTHER SERVICES USED ACROSS SIRI, SEARCH AND MORE. AT APPLE, WE FOCUS ON DELIVERING

EXPERIENCES THAT ENRICH OUR USERS' LIVES. WHAT MAKES THIS POSSIBLE IS HARDWARE, SOFTWARE AND SERVICES THAT COME TOGETHER TO CREATE UNIQUE EXPERIENCES FOR OUR USERS. MANY OF THESE EXPERIENCES COME AS PART OF THE DEVICES WE MAKE, AND SOME RUN IN THE CLOUD, LIKE ICLOUD, MUSIC, APPLE TV, NEWS, APP STORE, SIRI, AND MANY MORE. ONE OF THE UNIQUE ELEMENTS ABOUT

APPLE'S BUSINESS IS THE SCALE AT WHICH WE OPERATE AND THE SPEED WITH WHICH WE INNOVATE. AWS HAS BEEN ABLE TO KEEP PACE, AND WE HAVE BEEN CUSTOMERS FOR MORE THAN A DECADE. THEY CONSISTENTLY SUPPORT OUR DYNAMIC NEEDS AT SCALE AND GLOBALLY. AS WE HAVE

GROWN OUR EFFORTS AROUND MACHINE LEARNING AND AI, OUR USE OF AWS HAS GROWN RIGHT ALONGSIDE. WE APPRECIATE WORKING WITH AWS. WE HAVE A STRONG RELATIONSHIP, AND THE INFRASTRUCTURE IS RELIABLE, PERFORMANT, AND ABLE TO SERVE OUR CUSTOMERS WORLDWIDE. AND THERE ARE SO MANY SERVICES WE RELY ON, WE COULD NOT EVEN FIT THEM ALL ON THIS TINY SCREEN. AS AN EXAMPLE, WHEN WE NEEDED TO SCALE INFERENCE GLOBALLY FOR SEARCH, WE DID SO BY LEVERAGING AWS SERVICES IN MORE THAN TEN REGIONS. MORE RECENTLY, WE HAVE STARTED TO USE AWS SOLUTIONS WITH GRAVITON AND INFERENTIA FOR ML SERVICES LIKE APPROXIMATE NEAREST NEIGHBOR SEARCH AND OUR KEY VALUE STREAMING STORE. WE HAVE REALIZED OVER 40%

EFFICIENCY GAINS BY MIGRATING OUR AWS INSTANCES FROM X86 TO GRAVITON, AND WE HAVE BEEN ABLE TO EXECUTE SOME OF OUR SEARCH TEXT FEATURES TWICE AS EFFICIENTLY AFTER MOVING FROM G4 INSTANCES TO INFERENTIA TWO. THIS YEAR HAS MARKED ONE OF OUR MOST AMBITIOUS YEARS FOR MACHINE LEARNING AND AI TO DATE, AS WE HAVE BUILT AND LAUNCHED APPLE INTELLIGENCE. APPLE INTELLIGENCE IS PERSONAL INTELLIGENCE. IT'S AN INCREDIBLE SET OF FEATURES

INTEGRATED ACROSS IPHONE, IPAD AND MAC THAT UNDERSTAND YOU AND HELP YOU WORK, COMMUNICATE, AND EXPRESS YOURSELF. APPLE INTELLIGENCE IS POWERED BY OUR OWN LARGE LANGUAGE MODELS, DIFFUSION MODELS, AND ADAPTERS ON DEVICE AND ON SERVERS. FEATURES INCLUDE OUR SYSTEM WIDE WRITING TOOLS, NOTIFICATION SUMMARIES, IMPROVEMENTS TO SIRI, AND MORE, INCLUDING MY FAVORITE, GENMOJIS. AND THESE ALL RUN IN A WAY THAT PROTECTS USERS' PRIVACY AT EVERY STEP. TO DEVELOP APPLE INTELLIGENCE, WE NEEDED TO FURTHER SCALE OUR INFRASTRUCTURE FOR TRAINING. TO SUPPORT THIS INNOVATION, WE NEEDED ACCESS TO A LARGE AMOUNT OF THE MOST PERFORMANT ACCELERATORS. AND AGAIN, AWS HAS BEEN RIGHT THERE

ALONGSIDE US AS WE'VE SCALED. WE WORK WITH AWS SERVICES ACROSS VIRTUALLY ALL PHASES OF OUR AI AND ML LIFECYCLE. KEY AREAS WHERE WE LEVERAGE AWS INCLUDE FINE TUNING OUR MODELS, THE POST-TRAINING OPTIMIZATION WHERE WE DISTILL OUR MODELS TO FIT ON DEVICE, AND BUILDING AND FINALIZING OUR APPLE INTELLIGENCE ADAPTERS, READY TO DEPLOY ON APPLE DEVICES AND

SERVERS. AS WE CONTINUE TO EXPAND THE CAPABILITIES AND FEATURES OF APPLE INTELLIGENCE, WE WILL CONTINUE TO DEPEND ON THE SCALABLE, EFFICIENT, AND HIGH PERFORMING ACCELERATOR TECHNOLOGIES THAT AWS DELIVERS. LIKE MATT MENTIONED, TRAINIUM TWO IS JUST BECOMING GA. WE'RE IN THE EARLY STAGES OF

EVALUATING TRAINIUM TWO, AND WE EXPECT FROM EARLY NUMBERS TO GAIN UP TO 50% IMPROVEMENT IN EFFICIENCY IN PRE-TRAINING. WITH AWS, WE FOUND THAT WORKING CLOSELY TOGETHER AND TAKING ADVANTAGE OF THE LATEST TECHNOLOGIES HAS HELPED US BE MORE EFFICIENT IN THE CLOUD. AWS EXPERTISE, GUIDANCE AND SERVICES HAVE BEEN INSTRUMENTAL IN SUPPORTING OUR SCALE AND GROWTH AND MOST IMPORTANTLY, IN DELIVERING INCREDIBLE EXPERIENCES FOR OUR USERS. THANK YOU SO MUCH.

[APPLAUSE] [MUSIC] ALL RIGHT. THANKS A LOT, BENOIT. WE REALLY APPRECIATE THE LONGTIME PARTNERSHIP TOGETHER, AND WE'RE EXCITED ABOUT ALL OF THOSE SUPER USEFUL FEATURES THAT YOU'RE DELIVERING THAT WE CAN ALL TAKE ADVANTAGE OF. CAN'T WAIT TO SEE WHAT YOU COME UP WITH USING TRAINIUM TWO. ALL RIGHT, NOW, WHILE WE'RE REALLY EXCITED, AS YOU CAN TELL, ABOUT ANNOUNCING THE GA OF TRAINIUM TWO TODAY, IT TURNS OUT THAT THE GENERATIVE AI SPACE IS MOVING AT LIGHTNING SPEED. AND SO WE'RE NOT SLOWING DOWN EITHER. WE ARE COMMITTED TO DELIVERING ON THE VISION OF TRAINIUM LONG TERM. WE KNOW THAT WE HAVE TO KEEP UP WITH THE EVOLVING NEEDS OF GENERATIVE AI AND THE ENTIRE LANDSCAPE OF WHAT YOU ALL NEED FROM US AND FROM YOUR INSTANCES, WHICH IS WHY TODAY I'M EXCITED TO ALSO ANNOUNCE THE NEXT LEAP FORWARD. TODAY WE'RE

ANNOUNCING TRAINIUM THREE, COMING LATER NEXT YEAR. [APPLAUSE] TRAINIUM THREE WILL BE OUR FIRST CHIP THAT AWS MAKES ON A THREE NANOMETER PROCESS, AND IT'LL GIVE YOU 2X MORE COMPUTE THAN YOU GET FROM TRAINIUM TWO. IT'LL ALSO BE 40% MORE EFFICIENT, WHICH IS GREAT AS WELL. SO IT'LL

ALLOW YOU ALL TO BUILD BIGGER, FASTER, MORE EXCITING GEN AI APPLICATIONS. MORE INSTANCES, MORE CAPABILITIES, MORE COMPUTE THAN ANY OTHER CLOUD, WITH SILICON INNOVATIONS LIKE NITRO AND GRAVITON AND TRAINIUM. AND WE HAVE HIGH PERFORMANCE NETWORKING TO MAKE SURE THAT THE NETWORK DOESN'T GET IN YOUR WAY. AND THIS IS ALL WHY, ON AVERAGE, 130 MILLION NEW EC2 INSTANCES ARE LAUNCHED EVERY SINGLE DAY. PRETTY INCREDIBLE. NOW, EVERY DAY WE CONTINUE TO RE:INVENT WHAT

COMPUTE MEANS IN THE CLOUD. BUT OF COURSE, YOUR APPLICATIONS DON'T STOP WITH COMPUTE. SO LET'S MOVE ON TO OUR NEXT BUILDING BLOCK, AND THAT'S STORAGE. NOW, IF EVERY APPLICATION NEEDS COMPUTE BECAUSE IT PROVIDES ALL THAT PROCESSING POWER, OF COURSE EVERY APPLICATION ALSO NEEDS STORAGE, BECAUSE THAT'S WHERE YOUR DATA LIVES. NOW, I KNOW IT'S A LONG

TIME AGO, BUT SOME OF YOU MAY ACTUALLY REMEMBER WHAT STORAGE USED TO BE LIKE BEFORE AWS. YOU WOULD HAVE A ROOM, AND YOU'D JUST HAVE ALL THESE STORAGE BOXES, AND YOU'D FILL UP A BOX AND YOU'D HAVE TO GET ANOTHER ONE, AND THEN ANOTHER ONE. THEN YOU'D HAVE TO GO BACK TO THE FIRST ONE, BECAUSE YOU'D HAVE TO REPLACE SOME DISKS, AND IT WAS REALLY HARD TO MANAGE. AND IN TODAY'S WORLD IT WOULD ACTUALLY

BE ALMOST IMPOSSIBLE TO KEEP UP WITH THE SCALE OF DATA THAT YOU ALL HAVE. BACK IN 2006, WE ENVISIONED A BETTER WAY: WE COULD JUST PROVIDE SIMPLE, DURABLE, HIGHLY SCALABLE, SECURE STORAGE THAT COULD SCALE WITH VIRTUALLY ANY APPLICATION. AND SO WE LAUNCHED S3. IT'S THE VERY FIRST SERVICE THAT WE LAUNCHED

IN 2006, AND IT FUNDAMENTALLY CHANGED HOW PEOPLE MANAGE DATA. BUILT FROM THE GROUND UP TO HANDLE EXPLOSIVE GROWTH OVER THE LAST 18 YEARS, S3 NOW STORES OVER 400 TRILLION OBJECTS. THAT IS JUST INCREDIBLE. IT'S REALLY HARD TO GET YOUR HEAD AROUND WHAT THAT IS. HERE'S ONE INTERESTING THING. TEN YEARS

AGO, WE HAD FEWER THAN 100 CUSTOMERS THAT STORED A PETABYTE OF STORAGE INSIDE OF S3, AND A PETABYTE IS A LOT OF STORAGE. TODAY WE HAVE THOUSANDS OF CUSTOMERS ALL STORING MORE THAN A PETABYTE, AND SEVERAL CUSTOMERS THAT ARE STORING MORE THAN AN EXABYTE. AND THIS SCALING IS SOMETHING THAT CUSTOMERS TODAY JUST TAKE FOR GRANTED. IT'S NOT JUST SCALING THOUGH.
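To put the petabyte and exabyte figures in perspective, the jump between those units is a factor of a thousand each time. This is just standard decimal unit arithmetic, not a figure from the talk:

```python
# Decimal storage units.
GIGABYTE = 10**9
TERABYTE = 10**12
PETABYTE = 10**15
EXABYTE = 10**18

print(PETABYTE // TERABYTE)  # 1000: a petabyte is a thousand terabytes
print(EXABYTE // PETABYTE)   # 1000: an exabyte is a thousand petabytes
print(EXABYTE // GIGABYTE)   # 1000000000: a billion gigabytes in an exabyte
```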

DATA TODAY IS JUST EXPLODING, AND THE SCALING PART YOU LARGELY ASSUME S3 TAKES CARE OF FOR YOU. THE NEXT THING, HOWEVER, MANY OF YOU DO HAVE TO WORRY ABOUT IS COST. BUT IT TURNS OUT WE MAKE THAT EASIER FOR YOU TOO. NO ONE GIVES YOU

MORE OPTIONS TO BALANCE THE PERFORMANCE YOU NEED FROM YOUR STORAGE TOGETHER WITH THE COST. WE HAVE A TON OF SKUS TO HELP YOU WITH THIS: THINGS LIKE S3 STANDARD, WHICH IS HIGHLY DURABLE AND GOOD FOR THE MAJORITY OF WORKLOADS THAT YOU REGULARLY ACCESS. WHEN YOU HAVE OBJECTS THAT YOU DON'T ACCESS THAT FREQUENTLY, WE HAVE S3 INFREQUENT ACCESS, WHICH ALLOWS YOU TO LOWER YOUR COSTS. AND WE HAVE THINGS LIKE S3 GLACIER, WHICH CAN FURTHER REDUCE COSTS BY UP TO 95% FOR OBJECTS LIKE BACKUPS AND ARCHIVES THAT YOU BARELY NEED TO ACCESS AT ALL. NOW,

CUSTOMERS HAVE TOLD US THAT THEY LOVE HAVING ALL OF THESE DIFFERENT SKUS THAT HELP THEM BALANCE COST AND PERFORMANCE, BUT IT'S ALSO A LOT OF WORK: IT CAN BE COMPLEX TO FIGURE OUT WHETHER YOU SHOULD USE THIS SKU OR THAT SKU. SO WE DECIDED TO MAKE IT EASIER. A COUPLE OF YEARS AGO WE LAUNCHED S3 INTELLIGENT-TIERING. WHAT S3 INTELLIGENT-TIERING DOES IS ANALYZE THE ACCESS PATTERNS FOR YOUR STORAGE AND AUTOMATICALLY MOVE YOUR DATA TO THE RIGHT TIER. SINCE WE LAUNCHED S3 INTELLIGENT-TIERING, CUSTOMERS HAVE SAVED OVER $4 BILLION WITH ZERO ADDITIONAL WORK ON THEIR PART. IT'S PRETTY
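The tier selection that Intelligent-Tiering automates can be sketched as a simple idle-time rule. This is a minimal illustration under stated assumptions, not the actual service logic (the real tiering runs inside S3); the thresholds mirror Intelligent-Tiering's documented 30-day and 90-day access-tier transitions.

```python
from datetime import datetime, timedelta

# Illustrative only: the real S3 Intelligent-Tiering runs inside S3.
# Thresholds mirror its documented 30-day and 90-day transitions.
def pick_tier(last_accessed: datetime, now: datetime) -> str:
    """Choose an access tier from how long an object has sat idle."""
    idle = now - last_accessed
    if idle < timedelta(days=30):
        return "FREQUENT_ACCESS"
    if idle < timedelta(days=90):
        return "INFREQUENT_ACCESS"
    return "ARCHIVE_INSTANT_ACCESS"

now = datetime(2024, 12, 2)
print(pick_tier(datetime(2024, 11, 20), now))  # FREQUENT_ACCESS
print(pick_tier(datetime(2024, 8, 1), now))    # ARCHIVE_INSTANT_ACCESS
```

The point of the design is that the rule is applied per object, continuously, so you never have to pick a SKU up front.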

AWESOME. AND WHAT'S REALLY POWERFUL, AND IT'S AN INTERESTING OBSERVATION, IS WHEN YOU CAN COMPLETELY ELIMINATE ALL OF THAT COMPLEXITY AND JUST FOCUS ON GROWING YOUR BUSINESS. SO WHETHER YOU'RE MANAGING GIGABYTES OF DATA, PETABYTES OF DATA, OR EVEN EXABYTES OF DATA, S3 CAN HANDLE IT FOR YOU. AND THE BENEFITS OF THAT SCALE, THAT PERFORMANCE, THAT COST, THAT EASE OF USE, AND THOSE ADVANCED CAPABILITIES ARE WHY S3 UNDERLIES MORE THAN A MILLION DATA LAKES ALL AROUND THE WORLD. NOW, DATA LAKES SUPPORT LARGE ANALYTICS

WORKLOADS: THINGS LIKE FINANCIAL MODELING, REAL-TIME ADVERTISING, AND AI WORKLOADS. AND S3 OVER THE YEARS HAS DELIVERED A NUMBER OF DATA LAKE INNOVATIONS. THE TEAM HAS DELIVERED TRANSACTIONS-PER-SECOND INCREASES SO YOU CAN SUPPORT FASTER ANALYTICS, ADDED SUPPORT FOR STRONG CONSISTENCY, AND ADDED LOWER-LATENCY SKUS SO THAT YOU CAN GET QUICKER ACCESS IN THE CLOUD. AND OFTENTIMES WHEN I TALK TO

CUSTOMERS, IT'S FUNNY. I'LL ASK, WHAT IS IT THAT YOU LIKE BEST ABOUT S3? AND WHAT THEY'LL TELL ME IS: YOU KNOW WHAT I LIKE BEST ABOUT S3? S3 JUST WORKS. WE TAKE THAT AS QUITE A COMPLIMENT. BUT AS YOU KNOW, AWS IS NEVER SATISFIED. SO THE S3 TEAM STEPPED BACK AND ASKED, HOW CAN WE MAKE S3 WORK EVEN BETTER? AND THEY THOUGHT ABOUT HOW TO IMPROVE S3 FOR LARGE ANALYTICS AND AI USE CASES. AND

FIRST, LET'S TAKE A STEP BACK AND UNDERSTAND A LITTLE BIT ABOUT YOUR ANALYTICS DATA; IT HELPS TO UNDERSTAND HOW IT'S ORGANIZED. MOST ANALYTICS DATA IS ACTUALLY ORGANIZED IN TABULAR FORM. IT'S A HIGHLY EFFICIENT WAY TO WORK WITH A LOT OF DIFFERENT DATA THAT YOU WANT TO QUERY, AND APACHE PARQUET HAS EFFECTIVELY BECOME THE DE FACTO OPEN STANDARD FOR HOW YOU STORE TABULAR DATA IN THE CLOUD. AND MOST OF THAT IS STORED IN S3. IN

FACT, BECAUSE IT'S SUCH A GOOD FIT FOR DATA LAKES, PARQUET IS ACTUALLY ONE OF THE FASTEST-GROWING DATA TYPES IN ALL OF S3. NOW, WHEN YOU HAVE A BUNCH OF THESE PARQUET FILES (MANY CUSTOMERS HAVE MILLIONS, AND SOME AWS CUSTOMERS STORE BILLIONS), YOU WANT TO DO THINGS LIKE QUERY ACROSS THEM, AND SO YOU NEED A FILE STRUCTURE TO SUPPORT THAT. TODAY, MOST PEOPLE USE APACHE ICEBERG FOR THIS. ICEBERG, AS MANY OF YOU KNOW, IS AN OPEN SOURCE, HIGHLY PERFORMANT TABLE FORMAT THAT ALLOWS YOU TO WORK ACROSS ALL OF THE VARIOUS PARQUET FILES THAT YOU HAVE, AND IT ENABLES SOME REALLY USEFUL THINGS. IT ENABLES SQL ACCESS ACROSS THIS BROAD DATA LAKE, SO YOU CAN HAVE DIFFERENT PEOPLE IN YOUR ORGANIZATION USING VARIOUS ANALYTICS TOOLS. MAYBE THEY'RE

USING SPARK OR FLINK OR WHATEVER, AND THEY CAN ALL SAFELY WORK ON THE DATA WITHOUT HAVING TO WORRY ABOUT MESSING UP EACH OTHER'S WORKLOADS. ICEBERG IS A SUPER USEFUL OPEN SOURCE CONSTRUCT THAT ENABLES A LOT OF THESE CAPABILITIES. BUT A LOT OF CUSTOMERS WILL TELL YOU THAT, AS WITH MANY OPEN SOURCE PROJECTS, ICEBERG IS ACTUALLY REALLY CHALLENGING TO MANAGE, PARTICULARLY AT SCALE. IT'S HARD TO MANAGE THE
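The reason those concurrent readers and writers don't clobber each other is Iceberg's snapshot model: every commit produces a new immutable snapshot, and a reader pins whichever snapshot was current when it started. A toy sketch of that idea (illustrative only; the real format tracks manifests of Parquet files and optimistic commits, not a Python list):

```python
# Toy model of Iceberg-style snapshot isolation. Illustrative only:
# the real spec tracks manifests of Parquet files, not lists.
class ToyTable:
    def __init__(self):
        self.snapshots = [[]]  # history of immutable file lists

    def reader(self):
        # a reader pins the snapshot that is current when it starts
        return list(self.snapshots[-1])

    def commit(self, new_files):
        # a commit appends a new snapshot; pinned readers are untouched
        self.snapshots.append(self.snapshots[-1] + list(new_files))

t = ToyTable()
t.commit(["part-0001.parquet"])
pinned = t.reader()              # a long-running query starts here
t.commit(["part-0002.parquet"])  # a concurrent writer commits
print(pinned)                    # still ["part-0001.parquet"]
print(t.reader())                # now sees both files
```

Because old snapshots are never mutated, a Spark job and a Flink job can read and write the same table at the same time without coordinating.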

PERFORMANCE. IT'S HARD TO MANAGE THE SCALABILITY. IT'S HARD TO MANAGE THE SECURITY. AND SO WHAT HAPPENS IS MOST OF YOU OUT THERE HIRE DEDICATED TEAMS TO DO THIS. THOSE TEAMS TAKE CARE OF THINGS LIKE TABLE MAINTENANCE. YOU WORRY ABOUT DATA COMPACTION. YOU WORRY ABOUT ACCESS CONTROLS. ALL OF THIS IS WORK THAT GOES INTO MANAGING AND TRYING TO GET

BETTER PERFORMANCE OUT OF YOUR ICEBERG IMPLEMENTATIONS. SO WE ASKED THE QUESTION: WHAT IF S3 COULD JUST DO THIS FOR YOU? WHAT IF WE COULD JUST DO IT AUTOMATICALLY? WELL, I AM THRILLED TO ANNOUNCE THE LAUNCH OF A NEW BUCKET TYPE: S3 TABLE BUCKETS. [APPLAUSE] THIS IS S3 TABLES, A NEW BUCKET TYPE SPECIFICALLY FOR ICEBERG TABLES. AND WHAT THIS DOES IS WE

BASICALLY IMPROVE THE PERFORMANCE AND SCALABILITY OF ALL OF YOUR ICEBERG TABLES. IF YOU STORE YOUR PARQUET FILES IN ONE OF THESE S3 TABLE BUCKETS, YOU GET 3X BETTER QUERY PERFORMANCE AND TEN TIMES HIGHER TRANSACTIONS PER SECOND COMPARED TO STORING THOSE ICEBERG TABLES IN A GENERAL PURPOSE S3 BUCKET. THAT IS MASSIVE PERFORMANCE FOR DOING NO ADDITIONAL WORK. NOW, HERE'S HOW IT WORKS: S3 DOES THIS WORK FOR YOU. YOU PUT YOUR DATA THERE,

AND WE'LL AUTOMATICALLY HANDLE ALL THE TABLE MAINTENANCE EVENTS, THINGS LIKE COMPACTION AND SNAPSHOT MANAGEMENT, ALL OF THAT UNDIFFERENTIATED WORK. WE'LL REMOVE UNREFERENCED FILES TO HELP MANAGE THE SIZE, AND WE'LL CONTINUALLY OPTIMIZE QUERY PERFORMANCE AND COST FOR YOU AS YOUR DATA LAKE SCALES. THIS IS A FANTASTIC STEP: S3 IS COMPLETELY REINVENTING OBJECT STORAGE SPECIFICALLY FOR THE DATA LAKE WORLD TO DELIVER BETTER PERFORMANCE, BETTER COST, AND BETTER SCALE. I THINK THIS IS A GAME CHANGER FOR DATA LAKE PERFORMANCE, BUT PERFORMANCE IS ACTUALLY ONLY A SMALL PART OF THE EQUATION. YOU ALL KNOW AS
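As a rough sketch of what using the new bucket type might look like from code: a boto3 `s3tables` client shipped alongside the launch, but the exact operation names and parameters below are assumptions based on the announcement, so treat this as illustrative rather than authoritative. The name check mirrors the general S3 bucket naming rules (3-63 characters, lowercase letters, digits, hyphens), also an assumption for table buckets.

```python
import re

# General S3 bucket naming rules, assumed to apply to table buckets too.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def valid_table_bucket_name(name: str) -> bool:
    """Check 3-63 chars of lowercase letters, digits, and hyphens."""
    return bool(BUCKET_NAME_RE.match(name))

def create_analytics_table(bucket_name: str, region: str = "us-east-1"):
    """Sketch: create a table bucket, a namespace, and an Iceberg table.
    Operation names are assumptions from the launch announcement."""
    if not valid_table_bucket_name(bucket_name):
        raise ValueError(f"invalid table bucket name: {bucket_name}")
    import boto3  # requires AWS credentials; not executed here
    s3tables = boto3.client("s3tables", region_name=region)
    bucket = s3tables.create_table_bucket(name=bucket_name)
    s3tables.create_namespace(tableBucketARN=bucket["arn"], namespace=["sales"])
    s3tables.create_table(tableBucketARN=bucket["arn"], namespace="sales",
                          name="orders", format="ICEBERG")
    return bucket["arn"]

print(valid_table_bucket_name("my-analytics-tables"))  # True
print(valid_table_bucket_name("Bad_Name"))             # False
```

Once the table exists, the maintenance described above (compaction, snapshot expiry, unreferenced-file cleanup) is handled by S3 rather than by your own jobs.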

YOUR DATA VOLUME SCALES, IT ACTUALLY GETS HARDER AND HARDER TO FIND THE DATA YOU'RE LOOKING FOR. SO AS YOU GET REALLY LARGE, AS YOU GET PETABYTES OF DATA, METADATA BECOMES REALLY IMPORTANT. METADATA IS THE INFORMATION THAT HELPS YOU ORGANIZE AND UNDERSTAND THE OBJECTS YOU STORE IN S3, SO YOU CAN FIND WHAT YOU'RE LOOKING FOR WHETHER YOU HAVE PETABYTES OR EXABYTES OF DATA. THAT

METADATA HELPS, BUT YOU HAVE TO HAVE A WAY TO LOOK AT IT. I'LL USE AN EXAMPLE OF WHY METADATA IS USEFUL: MY PHONE. I DON'T KNOW ABOUT YOU ALL, BUT I HAVE TONS OF PHOTOS, AND I WANTED TO FIND AN OLD PHOTO OF ME FROM AN OLD RE:INVENT. IT WASN'T THAT HARD. I SEARCHED FOR LAS VEGAS, I SEARCHED FOR 2001, AND I QUICKLY FOUND THIS. NOW, THIS IS A PICTURE OF ME AND DON MACASKILL, WHO'S SITTING RIGHT HERE IN THE FRONT ROW, CEO OF SMUGMUG. AND DON WAS ACTUALLY

OUR VERY FIRST S3 CUSTOMER BACK IN 2006. THANK YOU, DON. [APPLAUSE] NOW, HOW DID I QUICKLY FIND THIS PHOTO? MY PHONE JUST AUTOMATICALLY ADDED METADATA, RIGHT? IT ADDED THE LOCATION AND THE DATE THE PHOTO WAS TAKEN, SO IT WAS EASY FOR ME TO SEARCH. YOU NEED A WAY TO FIND THIS DATA EASILY, BUT WHEN

YOU'RE DOING IT IN S3 TODAY, IT'S ACTUALLY REALLY HARD. YOU HAVE TO BUILD A METADATA SYSTEM: FIRST OF ALL, A LIST OF ALL OF YOUR OBJECTS IN STORAGE. THEN YOU CREATE AND MANAGE AN EVENT PROCESSING PIPELINE, BECAUSE YOU HAVE TO FIGURE OUT HOW TO ADD METADATA AND ASSOCIATE IT WITH ALL YOUR S3 OBJECTS. YOU STORE THE METADATA IN SOME SORT OF DATABASE THAT YOU CAN QUERY, AND THEN YOU DEVELOP CODE TO KEEP THESE THINGS IN SYNC. SO AS OBJECTS ARE CHANGED, ADDED, OR DELETED, YOU

KEEP THE METADATA IN SYNC. NOW, AT SCALE, YOU CAN IMAGINE, FIRST OF ALL, THIS IS UNDIFFERENTIATED HEAVY LIFTING, AND IT'S PRETTY MUCH IMPOSSIBLE TO MANAGE. NOW THERE'S A BETTER WAY. I'M EXCITED TO ANNOUNCE S3 METADATA. [APPLAUSE] S3 METADATA IS THE FASTEST AND EASIEST WAY FOR YOU TO INSTANTLY DISCOVER INFORMATION ABOUT YOUR S3 DATA, AND IT JUST MAKES SENSE. WHEN YOU HAVE AN OBJECT, WE TAKE THE METADATA ASSOCIATED WITH IT AND MAKE IT EASILY QUERYABLE, UPDATED IN NEAR REAL TIME. HOW DOES IT WORK? WE TAKE ALL OF

YOUR OBJECT METADATA AND STORE IT IN ONE OF THESE NEW TABLE BUCKETS THAT WE JUST TALKED ABOUT. SO WE AUTOMATICALLY STORE ALL OF YOUR OBJECT METADATA IN AN ICEBERG TABLE, AND THEN YOU CAN USE YOUR FAVORITE ANALYTICS TOOL TO EASILY INTERACT WITH AND QUERY THAT DATA, SO YOU CAN QUICKLY LEARN MORE ABOUT YOUR OBJECTS AND FIND THE ONE YOU'RE LOOKING FOR. AND AS OBJECTS CHANGE, S3 AUTOMATICALLY UPDATES THE METADATA FOR YOU WITHIN MINUTES, SO IT'S ALWAYS UP TO DATE. WE THINK
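Since the metadata lands in a regular Iceberg table, finding objects becomes a SQL problem for whatever engine you already use (Athena, Spark, and so on). A sketch of the kind of query you could run; the table and column names here are illustrative assumptions, not the documented S3 Metadata schema.

```python
# Table and column names below are illustrative assumptions, not the
# documented S3 Metadata schema.
def find_recent_objects_sql(metadata_table: str, prefix: str, days: int) -> str:
    """Build a SQL query over a metadata table: objects under a key
    prefix modified in the last N days, newest first."""
    return (
        f"SELECT key, size, last_modified_date "
        f"FROM {metadata_table} "
        f"WHERE key LIKE '{prefix}%' "
        f"AND last_modified_date > current_timestamp - interval '{days}' day "
        f"ORDER BY last_modified_date DESC"
    )

print(find_recent_objects_sql("my_bucket_metadata", "photos/2024/", 7))
```

This is the same kind of lookup as the phone-photo search above: filter on attributes the system captured for you, instead of listing and inspecting every object.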

CUSTOMERS ARE JUST GOING TO LOVE THIS CAPABILITY. IT'S REALLY A STEP CHANGE IN HOW YOU CAN USE YOUR S3 DATA, AND WE THINK IT MATERIALLY CHANGES HOW YOU CAN USE YOUR DATA FOR ANALYTICS AS WELL AS FOR REALLY LARGE AI MODELING USE CASES. I'M SUPER EXCITED ABOUT THESE NEW S3 FEATURES. NOW, FROM DAY ONE, WE'VE BEEN PUSHING THE BOUNDARIES OF WHAT'S POSSIBLE WITH CLOUD STORAGE. WE'VE HELPED MANY OF YOU GROW TO JUST UNPRECEDENTED SCALE, WE'VE HELPED YOU OPTIMIZE YOUR COSTS, AND WE'VE HELPED YOU GET UNMATCHED PERFORMANCE. AND NOW WE MAKE IT INCREDIBLY EASY

FOR YOU TO FIND THE DATA THAT YOU'RE LOOKING FOR. BUT I WILL TELL YOU, WE'RE NEVER DONE. OUR PROMISE TO YOU IS THAT WE WILL KEEP AUTOMATING WORK. WE'LL KEEP SIMPLIFYING ALL OF THESE COMPLEX PROCESSES THAT YOU HAVE, AND WE'LL KEEP REINVENTING STORAGE SO YOU ALL CAN FOCUS ON INNOVATING FOR YOUR CUSTOMERS. NOW LET'S HEAR FROM ANOTHER STARTUP CUSTOMER TO SEE HOW THEY'RE REINVENTING THEIR OWN INDUSTRY.

[MUSIC] IN 1995, THE WAY YOU EXISTED DIGITALLY WAS TO HAVE A WEBSITE. IN 2015, IT WAS TO HAVE A MOBILE APP YOU COULD INSTALL ON YOUR PHONE. IN 2025, THE WAY YOU EXIST DIGITALLY IS TO HAVE AN AI AGENT. WE MOVED FROM AN ERA OF

RULE-BASED SOFTWARE TO AN ERA OF SOFTWARE BUILT ON GOALS AND GUARDRAILS. WE'RE IN THE AGE OF CONVERSATIONAL AI, AND THE WAY YOU EXIST DIGITALLY IS TO HAVE A CONVERSATION WITH YOUR CUSTOMER ANY TIME OF DAY, 24/7. AND THAT'S WHAT WE'VE BUILT WITH SIERRA ON AMAZON WEB SERVICES. [MUSIC] OLD SOFTWARE WAS BASED ON RULES. IF YOU THINK ABOUT A TYPICAL WORKFLOW AUTOMATION, IT LOOKED LIKE A DECISION TREE. YOU HAD TO ENUMERATE EVERY POSSIBILITY YOUR CUSTOMERS COULD DO. WITH AI, IT'S DIFFERENT, AND WITH AGENT OS YOU CAN MODEL YOUR

COMPANY'S GOALS AND GUARDRAILS TO BUILD ANY CUSTOMER EXPERIENCE THAT YOU CAN IMAGINE. WHAT DOES IT MEAN IF 90% OF YOUR CUSTOMER EXPERIENCE IS CONVERSATIONAL? [MUSIC] IT'S REMARKABLE HOW EASY IT IS TO START A COMPANY THANKS TO SERVICES LIKE AMAZON WEB SERVICES. WE CAN FOCUS ON WHERE WE WANT TO ADD VALUE AND REALLY RIDE THE COATTAILS OF THE INCREDIBLE INVESTMENT THAT AMAZON HAS MADE IN ITS CLOUD INFRASTRUCTURE. [MUSIC] I THINK THE COMPANIES THAT HAVE RUN SUCCESSFUL EXPERIMENTS TODAY WILL BE THE ONES THAT, FIVE YEARS FROM NOW, HAVE BUSINESS TRANSFORMATION DRIVEN BY AI. I THINK THE WAY TO DEAL WITH TECHNOLOGIES LIKE THAT IS TO DIVE IN AND LEARN AS QUICKLY AS POSSIBLE. [MUSIC] [APPLAUSE] VERY COOL. THANKS, BRETT. I THINK CUSTOMERS ARE REALLY GOING TO LOVE THAT. ALL RIGHT, I WANT

TO SHIFT OUR FOCUS TO ANOTHER IMPORTANT BUILDING BLOCK: DATABASES. NOW, EARLY ON IN AWS, WE SAW AN OPPORTUNITY TO REALLY IMPROVE HOW DATABASES OPERATED. IT TURNS OUT DATABASES WERE SUPER COMPLICATED, AND THERE WAS A TON OF OVERHEAD IN MANAGING THEM. CUSTOMERS SPENT A LOT OF TIME DOING THINGS LIKE PATCHING AND MANAGING, AND WE KNEW THERE WAS A LOT WE COULD TAKE ON FOR THEM. SO WE SET OFF TO REMOVE THIS

HEAVY LIFTING. WE LAUNCHED RDS, THE FIRST FULLY MANAGED RELATIONAL DATABASE SERVICE. AND WHEN YOU TALK TO CUSTOMERS TODAY, THEY'LL TELL YOU THEY ARE NEVER GOING BACK TO AN UNMANAGED DATABASE. THEY LOVE MANAGED DATABASES. NOW,

WHEN WE FIRST LAUNCHED RDS, THE VAST MAJORITY OF APPLICATIONS IN THE WORLD WERE RUNNING ON RELATIONAL DATABASES. BUT IT TURNS OUT THE NATURE OF APPLICATIONS WAS EVOLVING. WITH THE INTERNET, APPLICATIONS STARTED TO HAVE MORE USERS. THEY WERE INCREASINGLY DISTRIBUTED ALL

AROUND THE WORLD, AND CUSTOMERS STARTED TO HAVE VERY DIFFERENT EXPECTATIONS AROUND PERFORMANCE AND LATENCY. WE EXPERIENCED THIS OURSELVES AT AMAZON.COM ON OUR RETAIL SITE. BACK IN 2004, WE HAD A COUPLE OF ENGINEERS WHO REALIZED THAT OVER 70% OF OUR DATABASE OPERATIONS WERE JUST SIMPLE KEY-VALUE TRANSACTIONS, MEANING WE'D RUN A SIMPLE SQL QUERY WITH A PRIMARY KEY AND GET A SINGLE VALUE BACK. AND WE ASKED OURSELVES, WHY ARE WE USING A RELATIONAL DATABASE FOR THIS? IT SEEMED HEAVYWEIGHT, AND THE TEAM'S THOUGHT AT THE TIME WAS THAT WE COULD MAKE THIS FASTER, WE COULD MAKE IT CHEAPER, AND WE COULD

MAKE IT SCALE BETTER IF WE BUILT A PURPOSE-BUILT DATABASE. SO THOSE ENGINEERS, TWO OF WHOM YOU SEE UP HERE, OUR VERY OWN SWAMI AND WERNER, WHO ARE GIVING KEYNOTES LATER THIS WEEK, WROTE WHAT IS NOW CALLED THE DYNAMO PAPER, ABOUT A TECHNOLOGY THAT REALLY SPAWNED THE NOSQL MOVEMENT. AND IT ALSO LED US TO DEVELOP DYNAMODB. DYNAMO

IS A SERVERLESS, NOSQL, FULLY MANAGED DATABASE THAT GIVES YOU SINGLE-DIGIT MILLISECOND LATENCY AT ANY SCALE, AND IT SCALES COMPLETELY UP AND ALL THE WAY DOWN. BUT DYNAMO WAS JUST THE FIRST PURPOSE-BUILT DATABASE THAT WE BUILT. WE GOT REALLY EXCITED ABOUT THAT, AND WE STARTED BUILDING LOTS OF PURPOSE-BUILT DATABASES, FROM GRAPH DATABASES TO TIME SERIES DATABASES TO DOCUMENT DATABASES. THE IDEA WAS THAT YOU ALL NEEDED THE BEST TOOL FOR THE JOB, AND THAT'S WHAT THESE DATABASES PROVIDED. NOW, THESE NOSQL DATABASES AND THIS WIDE SWATH OF PURPOSE-BUILT DATABASES HAVE BEEN INCREDIBLY POPULAR. THEY HAVE ENABLED WORKLOADS THAT OTHERWISE JUST
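The key-value observation behind Dynamo is easy to see in code: what a relational engine answers with `SELECT value FROM sessions WHERE session_id = 'abc'` is really a single-key lookup. A sketch of the same access in DynamoDB's low-level GetItem request shape (the table and attribute names here are hypothetical, for illustration only):

```python
# A query like
#   SELECT value FROM sessions WHERE session_id = 'abc'
# is really a single-key lookup. The same access in DynamoDB's
# low-level GetItem request shape (names are hypothetical):
def dynamo_get_request(table: str, key_name: str, key_value: str) -> dict:
    """Build the request dict for a single-key DynamoDB GetItem."""
    return {"TableName": table, "Key": {key_name: {"S": key_value}}}

req = dynamo_get_request("sessions", "session_id", "abc")
print(req)
# With boto3, this request would be sent as:
#   import boto3
#   boto3.client("dynamodb").get_item(**req)
```

Because the access pattern is this narrow, the engine can partition by key and scale horizontally without any of the machinery a general SQL query planner needs.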

WOULDN'T HAVE BEEN POSSIBLE. AND YOU ALL HAVE LOVED THEM. BUT IT TURNS OUT THAT SOMETIMES THE BEST DATABASE FOR THE JOB IS STILL RELATIONAL. AND SO RELATIONAL DIDN'T GO AWAY. IT'S STILL BY FAR THE BEST SOLUTION FOR MANY APPLICATIONS. SO WE KEPT INNOVATING THERE AS WELL. YOU ASKED US TO BUILD YOUR RELATIONAL DATABASE WITH THE RELIABILITY OF COMMERCIAL DATABASES OUT THERE, BUT WITH FRIENDLIER LICENSING TERMS AND THE PORTABILITY OF OPEN SOURCE. AND SO WE'RE ACTUALLY

CELEBRATING THE TEN-YEAR ANNIVERSARY OF LAUNCHING AURORA AT RE:INVENT. [APPLAUSE] AURORA IS, OF COURSE, FULLY MYSQL AND POSTGRES COMPATIBLE, AND IT DELIVERS 3 TO 5X THE PERFORMANCE THAT YOU GET FROM SELF-MANAGED OPEN SOURCE, ALL AT ONE-TENTH THE COST OF COMMERCIAL DATABASES. IT'S NO SURPRISE, REALLY, THAT AURORA BECAME OUR FASTEST GROWING, MOST POPULAR SERVICE, WITH HUNDREDS OF THOUSANDS OF CUSTOMERS. BUT WE DIDN'T STOP INNOVATING, OF COURSE. AND HERE'S JUST A SAMPLE

OF THE INNOVATIONS THAT WE'VE DELIVERED IN AURORA OVER THE YEARS. WE DELIVERED SERVERLESS, SO YOU COULD GET RID OF MANAGING CAPACITY. WE DELIVERED I/O-OPTIMIZED FOR AURORA TO GIVE YOU BETTER PRICE PERFORMANCE AND BETTER PRICE PREDICTABILITY. WE GAVE YOU LIMITLESS DATABASE, WHICH ALLOWS COMPLETELY UNLIMITED HORIZONTAL SCALING OF YOUR DATABASES. AND WE'VE ADDED VECTOR CAPABILITIES AND AI INSIDE OF AURORA TO HELP WITH GEN AI USE CASES. THERE HAVE BEEN MANY OTHERS, AND WE

CONTINUE TO PUSH THE BOUNDARIES OF COST, PERFORMANCE, EASE OF USE, AND FUNCTIONALITY. SO THE TEAM TOOK A LOOK AT ALL OF THESE INNOVATIONS, AND THEY SAT DOWN WITH SOME OF OUR VERY BEST DATABASE CUSTOMERS AND ASKED THEM: WHAT WOULD A PERFECT DATABASE SOLUTION LOOK LIKE? IF YOU JUST TAKE AWAY THE CONSTRAINTS, WHAT WOULD A PERFECT DATABASE LOOK LIKE? AND THE CUSTOMERS TOLD US: LOOK, WE ASSUME YOU CAN'T GIVE US EVERYTHING, BUT IF YOU COULD, WE'D LIKE A DATABASE THAT HAD HIGH AVAILABILITY, THAT WAS, OF COURSE, MULTI-REGION, THAT OFFERED REALLY LOW LATENCY FOR READS AND WRITES, THAT OFFERED STRONG CONSISTENCY, THAT HAD ZERO OPERATIONAL BURDEN FOR THEM, AND THAT, OF COURSE, HAD SQL SEMANTICS. NOW, THAT IS A LOT OF ANDS. AND, YOU KNOW, A LOT OF PEOPLE WILL TELL YOU YOU CAN'T HAVE EVERYTHING. IN FACT, HOW OFTEN ARE YOU GIVEN THE CHOICE,

WHEN YOU'RE TRYING TO BUILD SOMETHING: DO YOU WANT A OR B? AND THE PROBLEM, AND THIS IS INTERESTING, IS THAT WHEN YOU HAVE TO PICK A OR B, IT ACTUALLY LIMITS YOUR THINKING. AT AMAZON, THAT'S NOT HOW WE THINK ABOUT IT. IN FACT, WE CALL THAT THE TYRANNY OF THE OR. IT CREATES THESE FALSE BOUNDARIES, RIGHT? YOU INSTANTLY START THINKING, I HAVE TO DO A OR B. BUT WE PUSH TEAMS TO THINK ABOUT HOW YOU DO A AND B, AND THAT REALLY STARTS TO HELP YOU THINK DIFFERENTLY. NOW LOOK, THERE ARE DATABASES OUT THERE THAT WILL GIVE YOU SOME OF THESE CAPABILITIES ALREADY. SOMETIMES

YOU CAN GET A DATABASE TODAY THAT'LL GIVE YOU LOW LATENCY AND HIGH AVAILABILITY, BUT YOU CAN'T GET STRONG CONSISTENCY OUT OF IT. THERE ARE OTHER DATABASE OFFERINGS THAT ARE GLOBAL AND HAVE STRONG CONSISTENCY AND HIGH AVAILABILITY ACROSS MULTIPLE REGIONS, BUT FOR THOSE THE LATENCY IS REALLY, REALLY HIGH, AND FORGET SQL COMPATIBILITY. SO WE CHALLENGED OURSELVES TO GO SOLVE FOR THE AND. AND IT TURNS OUT THAT BECAUSE WE CONTROL THE END-TO-END

ENVIRONMENT FOR AURORA, RIGHT? WE CONTROL THE ENGINE, THE INFRASTRUCTURE, THE INSTANCES, EVERYTHING, SO WE CAN CHANGE A LOT OF THINGS. ONE OF THE FIRST THINGS WE DID WAS LOOK AT THE CORE DATABASE ENGINE AND HOW IT WORKS IN COMBINATION WITH OUR GLOBAL FOOTPRINT, TO SEE IF THAT MIGHT HELP US DELIVER. THE FIRST BIG PROBLEM WE NEEDED TO TACKLE, IF WE WERE REALLY GOING TO DELIVER ALL THESE CAPABILITIES, WAS HOW WE WOULD ACHIEVE MULTI-REGION STRONG CONSISTENCY WHILE ALSO DELIVERING LOW LATENCY. THAT IS A REALLY HARD PROBLEM. YOU'VE GOT THESE APPS THAT ARE WRITING ACROSS REGIONS, RIGHT? AND WHEN YOU DO THAT, THE TRANSACTIONS NEED TO BE SEQUENCED IN THE RIGHT WAY SO THAT ALL OF YOUR APPLICATIONS ARE GUARANTEED TO READ THE LATEST DATA. BUT WHEN YOU DO THAT, YOU TYPICALLY TAKE A

LOCK ON THE DATA TO AVOID CONFLICTS AS WRITES GO BACK AND FORTH. NOW, IT TURNS OUT YOU CAN ACTUALLY DO THIS TODAY WITH HOW DATABASE ENGINES OPERATE, BUT IT'S INCREDIBLY SLOW, AND I'LL TAKE A SECOND TO EXPLAIN WHY. LET'S SAY WE HAVE A TYPICAL ACTIVE-ACTIVE DATABASE SETUP ACROSS TWO REGIONS, AND WE WANT TO COMPLETE A TRANSACTION MUCH LIKE THIS ONE. THIS TRANSACTION HAS ABOUT

TEN STATEMENTS TO IT, WHICH IS, I THINK, PRETTY AVERAGE FOR A DATABASE TRANSACTION. IN A TRADITIONAL DATABASE, YOU'RE IN A SINGLE REGION OR A SINGLE LOCATION, AND YOU'D DO TEN ROUND TRIPS BETWEEN THE APPLICATION AND THE DATABASE. IF YOU'RE ALL IN THE SAME LOCATION, THE LATENCY IS REALLY LOW, AND THIS JUST WORKS FINE. THAT'S HOW DATABASES HAVE OPERATED AND HOW THEY'VE BEEN BUILT FROM THE BEGINNING. BUT NOW LET'S SAY YOU'RE DOING THAT ACROSS REGIONS. IT BECOMES

REALLY SLOW: COMMUNICATION ACTUALLY HAS TO GO BACK AND FORTH TEN TIMES BEFORE THE TRANSACTION CAN COMMIT. IN THIS EXAMPLE, LET'S SAY WE HAVE A DATABASE RUNNING IN VIRGINIA AND ANOTHER ONE RUNNING IN TOKYO. THE ROUND TRIP BETWEEN VIRGINIA AND TOKYO IS ABOUT 158 MILLISECONDS. NOW, IN THIS EXAMPLE, THAT DATA HAS TO GO BACK AND FORTH TEN TIMES TO COMMIT EVERY SINGLE ONE OF THOSE PIECES OF THE TRANSACTION. THAT'S 1.6 SECONDS. AND IF YOU ADD MORE REGIONS, IT BECOMES EVEN SLOWER. SO THAT IS WAY TOO

SLOW FOR TODAY'S APPLICATIONS AND FOR MOST USE CASES. NOW, GIVEN HOW DATABASES OPERATE, THIS IS A BIT OF A PHYSICS PROBLEM. UNFORTUNATELY, YOU'RE ALL GOING TO HAVE TO WAIT UNTIL A FUTURE RE:INVENT FOR US AT AWS TO SOLVE THE SPEED OF LIGHT. BUT TODAY WE ARE GOING TO LOOK AT FUNDAMENTALLY CHANGING HOW THE DATABASE ENGINE WORKS. WE HAD THIS THOUGHT: WHAT IF WE BUILT AN ARCHITECTURE THAT COULD ELIMINATE ALL THOSE ROUND TRIPS? IF YOU DIDN'T HAVE TO DO THEM, YOU COULD REDUCE THE LATENCY BY 90%. INSTEAD OF A 1.6 SECOND TRANSACTION, YOU COULD HAVE A 158 MILLISECOND TRANSACTION. SO WE DEVELOPED A
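The arithmetic behind those numbers is worth making explicit:

```python
# The round-trip arithmetic from the example above: ~158 ms round trip
# between Virginia and Tokyo, and a traditional engine needs one
# cross-region round trip per statement.
RTT_MS = 158
STATEMENTS = 10

traditional_ms = RTT_MS * STATEMENTS   # one round trip per statement
single_commit_ms = RTT_MS              # one round trip at commit time

print(traditional_ms / 1000)                  # 1.58 -> the "1.6 seconds"
print(1 - single_commit_ms / traditional_ms)  # 0.9 -> the 90% reduction
```

Collapsing ten round trips into one is where the entire latency win comes from; the remaining 158 ms is the physics floor for a Virginia-to-Tokyo commit.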

WHOLE NEW WAY TO PROCESS TRANSACTIONS. WE SEPARATED THE TRANSACTION PROCESSING FROM THE STORAGE LAYER, SO YOU DON'T NEED EVERY SINGLE ONE OF THOSE STATEMENTS TO GO CHECK AT COMMIT TIME. INSTEAD, YOU DO A SINGLE CHECK ON

COMMIT. WE PARALLELIZE ALL OF THE WRITES AT THE SAME TIME ACROSS ALL OF THE REGIONS, SO YOU CAN GET STRONG CONSISTENCY ACROSS REGIONS WITH SUPER FAST WRITES TO THE DATABASE. HOWEVER, THOSE OF YOU WHO ARE PAYING ATTENTION MIGHT HAVE NOTICED THAT THIS INTRODUCES A SECOND MAJOR PROBLEM: AS YOU'RE WRITING ALL OF THOSE INDEPENDENTLY ACROSS VARIOUS REGIONS, HOW DO YOU GET ALL THOSE TRANSACTIONS TO COMMIT IN THE ORDER THEY OCCURRED? BECAUSE IF THAT DOESN'T HAPPEN, YOU GET CORRUPTION, AND BAD THINGS HAPPEN. YOU HAVE TO MAKE SURE

ALL OF THOSE ARE ORDERED CORRECTLY. NOW, IN THEORY, THIS ARCHITECTURE WOULD WORK GREAT IF YOUR CLOCKS WERE PERFECTLY SYNCED, BECAUSE IN A TRADITIONAL DATABASE YOU SIMPLY LOOK AT THE TIMESTAMPS AND MAKE SURE THOSE ARE ALL IN ORDER. BUT AS YOU HAVE THESE DATABASES SPREAD AROUND THE WORLD, YOU HAVE TO DEAL WITH A PROBLEM KNOWN AS CLOCK DRIFT. WHAT HAPPENS, AND MANY OF YOU ARE SURELY AWARE OF THIS, IS THAT YOU GET TIMES THAT ARE ALL SLIGHTLY OUT OF SYNC. AND SO IT'S ACTUALLY HARD TO KNOW IF THE TIME OVER

HERE IS THE SAME AS THE TIME OVER HERE. HAVING THOSE PERFECTLY SYNCED IS EASIER SAID THAN DONE, BUT FORTUNATELY WE CONTROL THE GLOBAL INFRASTRUCTURE ALL THE WAY DOWN TO THE COMPONENT LEVEL. AND SO WE ADDED A BUILDING BLOCK CALLED THE AMAZON TIME SYNC SERVICE TO EC2. WHAT WE DID IS WE

ADDED A HARDWARE REFERENCE CLOCK IN EVERY SINGLE EC2 INSTANCE ALL AROUND THE WORLD, AND THOSE HARDWARE REFERENCE CLOCKS SYNC WITH SATELLITE-CONNECTED ATOMIC CLOCKS. THAT MEANS EVERY EC2 INSTANCE NOW HAS MICROSECOND-PRECISION TIME THAT'S IN SYNC WITH ANY INSTANCE ANYWHERE IN THE WORLD. NOW, THAT'S ABOUT AS DEEP A TECH DIVE AS I'M GOING TO GO TODAY, AND WERNER IS GOING TO GO A LOT DEEPER IN HIS TALK ON THURSDAY.
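To see why microsecond precision matters for ordering commits, it helps to think of each timestamp as an uncertainty interval. This sketch is illustrative, not the actual ordering logic inside Aurora DSQL: two commits can be ordered confidently only when their clock-error intervals do not overlap, so shrinking the error from milliseconds to microseconds is what makes cross-region ordering practical.

```python
# With a known error bound, an event's true time lies somewhere in
# [t - err, t + err]. Two commits can be ordered definitively only
# when their uncertainty intervals do not overlap.
def definitely_before(t1: float, err1: float, t2: float, err2: float) -> bool:
    """True if event 1 certainly happened before event 2 (seconds)."""
    return t1 + err1 < t2 - err2

# With millisecond-level drift (5 ms error), commits 1 ms apart
# in different regions cannot be ordered with certainty:
print(definitely_before(1.000, 0.005, 1.001, 0.005))       # False
# With microsecond-level bounds (5 us error), the same gap is decisive:
print(definitely_before(1.000, 0.000005, 1.001, 0.000005)) # True
```

The same interval idea underlies other bounded-clock systems; the tighter the bound, the less often two commits land in the ambiguous region.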

SO IF YOU'RE INTERESTED, I ENCOURAGE YOU TO CHECK IT OUT. BUT THE NET IS THAT NOW THAT WE HAVE MICROSECOND-PRECISION TIME AND THIS REDESIGNED TRANSACTION ENGINE, ALL THE PIECES ARE THERE FOR US TO AVOID THOSE "OR" TRADE-OFFS AND DELIVER ON THE "AND". SO I AM REALLY EXCITED TO ANNOUNCE AMAZON AURORA DSQL. [APPLAUSE] THIS IS THE NEXT ERA OF AURORA. AURORA DSQL IS THE FASTEST DISTRIBUTED SQL DATABASE ANYWHERE. IT DELIVERS VIRTUALLY UNLIMITED SCALE ACROSS REGIONS, WITH ZERO INFRASTRUCTURE MANAGEMENT FOR YOU AND A FULLY SERVERLESS DESIGN THAT SCALES DOWN TO ZERO.

AURORA DSQL DELIVERS FIVE NINES OF AVAILABILITY. IT'S STRONGLY CONSISTENT, YOU GET LOW LATENCY READS AND WRITES, AND AURORA DSQL IS POSTGRES COMPATIBLE, SO IT'S REALLY EASY TO START USING TODAY. WE WANTED TO SEE HOW THIS NEW OFFERING WOULD COMPARE AGAINST GOOGLE SPANNER, WHICH IS PROBABLY THE CLOSEST OFFERING OUT THERE TODAY. SO WE DID A MULTI-REGION SETUP AND BENCHMARKED COMMITTING THAT SAME TEN-STATEMENT TRANSACTION WE SAW EARLIER. IT TURNS OUT THAT AURORA DSQL DELIVERS 4X FASTER READS AND WRITES THAN SPANNER. PRETTY AWESOME. AND WE'RE REALLY EXCITED TO SEE HOW
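Because Aurora DSQL speaks the Postgres wire protocol, a standard Postgres driver should be able to connect to it. The endpoint format, user, and auth flow in this sketch are placeholders, not documented values; treat it as an illustration of what "Postgres compatible" buys you, not as the official connection procedure.

```python
# Endpoint format, user, and auth flow below are placeholders.
def build_dsn(host: str, dbname: str = "postgres", user: str = "admin") -> str:
    """Assemble a libpq-style connection string."""
    return f"host={host} port=5432 dbname={dbname} user={user} sslmode=require"

dsn = build_dsn("my-cluster.example.aws")  # hypothetical endpoint
print(dsn)
# e.g. with psycopg, supplying an IAM-based auth token as the password
# (token generation not shown):
#   import psycopg
#   with psycopg.connect(dsn, password=token) as conn:
#       conn.execute("SELECT 1")
```

The practical upshot is that existing Postgres tooling, ORMs, and drivers are the migration path, rather than a proprietary client.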

YOU'RE GOING TO LEVERAGE THIS IN YOUR APPLICATIONS. [APPLAUSE] BUT ONE MORE THING. IT TURNS OUT THAT RELATIONAL DATABASES ARE NOT THE ONLY ONES THAT BENEFIT FROM MULTI-REGION, STRONGLY CONSISTENT, LOW LATENCY CAPABILITIES. SO I'M ALSO PLEASED TO ANNOUNCE THAT WE'RE ADDING THE SAME MULTI-REGION STRONG CONSISTENCY TO DYNAMODB GLOBAL TABLES. [APPLAUSE] SO NOW, WHETHER YOU'RE RUNNING SQL OR NOSQL, YOU GET THE BEST OF ALL WORLDS: ACTIVE-ACTIVE, MULTI-REGION DATABASES WITH STRONG CONSISTENCY, LOW LATENCY, AND HIGH AVAILABILITY. THIS TYPE

OF CORE INNOVATION IN THESE FUNDAMENTAL BUILDING BLOCKS IS WHY SOME OF THE BIGGEST ENTERPRISES IN THE WORLD TRUST AWS WITH THEIR WORKLOADS. ONE OF THOSE COMPANIES IS JPMORGAN CHASE. IN 2020, WE HAD JPMC CIO LORI BEER ON STAGE TO TALK ABOUT HOW THEY WERE STARTING THEIR CLOUD MIGRATION TO AWS. NOW,

OVER THE PAST FOUR YEARS, THE TEAM AT JPMC HAS BEEN DOING A TON OF WORK TO MODERNIZE THEIR INFRASTRUCTURE, AND I'M REALLY EXCITED TO WELCOME BACK LORI TO SHARE WHERE THEY ARE IN THEIR JOURNEY. PLEASE WELCOME LORI BEER. [MUSIC] GOOD MORNING. JPMORGAN CHASE IS A 225-YEAR-OLD INSTITUTION THAT SERVES CUSTOMERS, CLIENTS, BUSINESSES, AND GOVERNMENTS ACROSS THE GLOBE. OUR STATED PURPOSE IS TO MAKE DREAMS POSSIBLE FOR EVERYONE, EVERYWHERE, EVERY DAY. AND WE DO THIS AT TREMENDOUS SCALE. WE SERVE 82 MILLION CUSTOMERS IN THE US, FINANCING HOME OWNERSHIP, EDUCATION, AND OTHER FAMILY MILESTONES. WE BANK MORE THAN

90% OF FORTUNE 500 COMPANIES, AND EVERY DAY WE PROCESS $10 TRILLION OF PAYMENTS. ALL OF THIS IS WHY WE INVEST $17

BILLION IN TECHNOLOGY AND HAVE AN AMBITIOUS MODERNIZATION AGENDA TO DRIVE GROWTH. WE HAVE 44,000 SOFTWARE ENGINEERS WHO RUN MORE THAN 6,000 APPLICATIONS AND MANAGE NEARLY AN EXABYTE OF DATA FROM MARKETS, CUSTOMERS, PRODUCTS, RISK AND COMPLIANCE, AND MORE. SOME OF YOU MAY REMEMBER I SPOKE AT RE:INVENT FOUR YEARS AGO, AND IT'S GREAT TO BE BACK TO UPDATE YOU ON OUR PROGRESS. WHILE THE INDUSTRY HAS EVOLVED DRAMATICALLY OVER THE PAST FOUR YEARS, THE CORE PRINCIPLES OF OUR CLOUD PROGRAM HAVE NOT. WE ARE STILL FOCUSED ON ESTABLISHING A STRONG SECURITY FOUNDATION THAT IS RESILIENT AND REFLECTS OUR ROBUST REGULATORY FRAMEWORK, PRIORITIZING MODERNIZATION ACROSS BOTH THE BUSINESS AND TECHNOLOGY, ENABLING INNOVATIVE SERVICES LIKE AI AND SER
