Seattle, Washington, United States
400K followers
500+ connections
Activity
-
Matt Garman shared this:
Q1 showed strong momentum as customers continue to choose Amazon Web Services (AWS) to build and run AI. Some of what we saw, and other highlights, since our last earnings:
▪️ AWS is growing at 28% YoY — our fastest growth in 15 quarters — on a very large base.
▪️ We exceeded $20 billion annual revenue run rate for Amazon’s chips business — inclusive of Graviton, Trainium, and Nitro — which is growing triple-digit percentages YoY.
▪️ Amazon Bedrock continues to scale rapidly — processing more tokens in Q1 than in all prior years combined and 170% growth in customer spend QoQ.
▪️ We also expanded model choice, including adding OpenAI models and introducing Amazon Bedrock Managed Agents, powered by OpenAI.
▪️ Agents are moving into real systems. With AgentCore, customers are deploying agents as frequently as every 10 seconds.
▪️ Our application layer continues to grow. Customers are using Amazon solutions like Connect, Transform, and Quick to automate workflows. At our event yesterday, we launched the new Amazon Quick desktop app, which is already changing the way I work.
▪️ We're seeing strong builder momentum. The number of developers using Kiro more than doubled QoQ, and enterprise customer usage is up nearly 10x.
We're still early, but the pace of adoption continues to accelerate. Thank you to the AWS team and our customers and partners who make this all possible every day. https://lnkd.in/eaPR-u-5
-
Matt Garman shared this:
Big day at #WhatsNextWithAWS. We’re giving customers both the applications and the building blocks to make agents real in production. You can see that across today’s announcements. We’re expanding our partnership with OpenAI:
• OpenAI models now available through Bedrock
• Codex on Bedrock for enterprise development
• Amazon Bedrock Managed Agents, powered by OpenAI, to build and run agents at scale
Amazon Connect is evolving into a set of agentic solutions that run core business workflows. And Amazon Quick is becoming an even more powerful AI assistant. Quick learns how you work, builds context over time, and takes action across your data and systems. I’ve been using Quick’s new desktop app over the past few weeks, and it’s the biggest AI productivity boost I’ve seen so far. I encourage you to check it out. These are just a few highlights. More on today’s news here:
Amazon Quick: https://lnkd.in/eCyyM7ud
Amazon Connect: https://lnkd.in/eGuRJUeP
AWS-OpenAI partnership: https://lnkd.in/e9zK-Qtz
-
Matt Garman shared this:
One of the biggest shifts in AI right now is toward systems that run continuously. Meta is deploying tens of millions of AWS Graviton cores to support that shift—one of the largest Graviton deployments in Amazon Web Services (AWS) history. These are production systems that reason, plan, and operate in real time at global scale. That changes the infrastructure requirements. Graviton was built for this kind of workload: sustained, efficient compute with low-latency communication between cores. This is what the next generation of AI infrastructure looks like. We’re proud to be working with Meta on it. Full announcement: https://lnkd.in/ex4QybcP
-
Matt Garman shared this:
Over 100,000 customers are running Claude on Amazon Bedrock — accelerating drug discovery, transforming customer service, and reimagining how software gets built. Giving customers access to the best AI models, built and served on world-class infrastructure, is exactly what Amazon and Anthropic have been building toward together. Today, Anthropic is committing more than $100 billion to Amazon Web Services (AWS) over the next decade — securing up to 5 gigawatts of capacity across Trainium2, Trainium3, Trainium4, and future generations of our custom silicon to train and power Claude. And we're bringing Claude Platform directly into AWS so customers can access it through their existing account, no extra credentials or contracts needed. We love what we're building with Anthropic — and the customers we're building it for are just getting started. https://lnkd.in/eqURNc_W
-
Matt Garman shared this:
Claude Opus 4.7 is now available in Amazon Bedrock. It’s Anthropic’s most capable Opus model, with strong performance across coding and complex, long-running tasks. It’s built for production with improved throughput, broader availability, and integrated privacy controls. This is the first Anthropic model running on Bedrock's next-generation inference engine, which was designed for zero operator access for even greater security guarantees, offers better availability with dynamic traffic routing, and provides improved scalability for your most critical workloads. More details here: https://lnkd.in/eKivD6xv
-
Matt Garman shared this:
We talk a lot about agentic AI changing how work gets done. Science is one of the most important places that shift can happen. Today we're introducing Amazon Bio Discovery: a new AWS application that provides scientists with advanced AI tools to help accelerate antibody drug discovery without needing to write code or manage complex infrastructure. Scientists have access to leading AI models for drug discovery and an agent that can help them design experiments. It also connects scientists directly to lab partners where they test and synthesize the top drug candidates, and results feed back into the application to improve the next round of design. In early work with Memorial Sloan Kettering Cancer Center, this reduced antibody design timelines from months to weeks. Making these tools accessible to more scientists, and not just to experts with AI or coding skills, is a meaningful step forward in the drug discovery journey. More here: https://lnkd.in/gMuWma6b
-
Matt Garman shared this:
As NASA's Artemis II crew returns to Earth after a historic journey around the Moon, I'm proud of the role that AWS played in supporting that mission. For the first time in more than 50 years, astronauts traveled beyond low Earth orbit. NASA's flight sciences team used Amazon EC2 in AWS GovCloud as a primary compute platform for trajectory analysis, the precision calculations that ensure the spacecraft stays on its exact path around the Moon and back, especially in the critical first 48 hours after launch. And the incredible 4K images and video from Orion were transmitted by NASA's Orion Artemis II Optical Communications System over the AWS global network. As NASA looks ahead to the planned lunar landing in 2028, we're honored to support what comes next. Congratulations to Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen. Welcome home!
-
Matt Garman shared this:
We’re seeing the same pattern across customers building agents: teams are moving fast, but they don’t have a clear view of what already exists. Today we’re introducing Amazon Web Services (AWS) Agent Registry in Preview, available through Amazon Bedrock AgentCore, to solve that. It gives organizations a centralized way to discover, share, and govern agents across teams so builders can reuse what’s already been created instead of starting from scratch. Companies like Sony are using this to enable reuse of agent patterns across business units. Southwest Airlines is focused on preventing agent sprawl and establishing governance from day one. PepsiCo and Mitsubishi Electric are looking to give developers a single place to discover and trust what they build against. As agents grow from dozens to thousands, this becomes less of a tooling problem and more of a systems problem. Agent Registry is how we’re helping customers manage that shift. Learn more: https://lnkd.in/eCQfMYBx
-
Matt Garman shared this:
Worth reading Andy Jassy's annual shareholder letter, just out today. It really gets to how Amazon is betting big on what I think is one of the biggest technology inflections we've seen. AI is going to transform every single company, every single job, every single customer experience — and we're still at the early stages. The letter lays out how progress is rarely a straight line, but we feel convicted about the path we're on. That conviction is what drives how we're building for customers right now. Read the full letter: https://lnkd.in/e8UphhJq
-
Matt Garman liked this:
5pm. Uber. Exhausted. And suddenly, my presentation problem solved itself. I was stuck in traffic after a long day, dreading the executive briefing I had to prep for the next morning. Customer meeting. High stakes. No deck. So I opened Amazon Quick on my phone and asked: "Help me build a slide deck for a customer conversation about how Quick drives enterprise productivity." What happened next? Quick didn't just generate slides. It understood the context. It pulled data on what this customer cares about—their priorities, their initiatives. It mapped Quick's capabilities to their actual pain points: process automation, internal workflows, agent assistants. Then it built a vibrant, data-rich PowerPoint that looked like it took hours. It took 10 minutes in an Uber. This is what agentic AI looks like at scale. Not replacing your thinking—amplifying it. Not adding busywork—eliminating it. The presentation went great. But the real win? I got my evening back. What's one task that's been eating your time? Imagine if your AI actually understood your context and just... solved it. https://www.aws.com/quick/ #AmazonQuick #AgenticAI #WhatsNextWithAWS #AWS #Productivity #enterprise #awspartners Jigar Thakkar Rahul Pathak Jose Kunnackal Alicia Trent Kevin Carlson Neal Cauley Natalie Hirsch Jonathan Preston Rich Geraffo Adebimpe Adelaja Justin Brindley-Koonce Leo Ohannesian Aditya Krishnan Arvind Muthukrishnan Michael Armentano ☁
-
Matt Garman liked this:
Just tuned into today's "What's Next with AWS" livestream with Matt Garman, Colleen Aubrey, Julia White, and OpenAI leaders on the future of agentic AI. Leading AWS's global Startups & VC organization, I see every day how the most ambitious builders are rethinking what's possible with AI. But the transformation isn't just external — it's happening inside our own walls too. Amazon Quick has become the backbone of how my teams work. From synthesizing complex customer engagements in minutes to surfacing opportunity intelligence across hundreds of accounts — it's given us something every leader wants: more time with customers and less time producing work about the work. This is what the future of work looks like. Not a concept. Not a pilot. Production-grade AI changing how teams deliver value every single day. Highly recommend trying Amazon Quick desktop app today! https://lnkd.in/gR5w6wsx #AWS #WhatsNextWithAWS #AgenticAI #Startups #FutureOfWork
AWS launches Amazon Quick desktop AI assistant that works across your applications, tools, and data
-
Matt Garman liked this:
The Quick desktop app is here, and it’s compelling. Connects to your email, calendar, Slack, local files, and several other apps to flag important communications, retrieve and summarize info, make recommendations, send communications, and create agents that do work you used to have to do yourself. Gets smarter and more personalized the more you use it. Been using it a lot recently and it is changing how I work. It’s allowing me to use applications like my inbox more like an archive, and Quick as my personalized, prioritized productivity hub that can multi-task various needs. Still early days, and a lot more coming, but excited for folks to start using it to make the undifferentiated work so much less complicated. https://lnkd.in/gwGTeFzm
-
Matt Garman liked this:
Today was a big milestone as we announced an update to our partnership with Microsoft. Microsoft will remain our primary cloud partner, but our products and services will now be available across all clouds. Excited for what's to come in this next phase of our partnership to advance and scale AI for people and organizations around the world. https://lnkd.in/gyX_FGr4
The next phase of the Microsoft OpenAI partnership
-
Matt Garman liked this:
After an incredible 20+ year journey alongside some of the most brilliant minds at Amazon Web Services, PayPal, eBay, The White House, Capital One, and UCSF, I am officially transitioning to focus on my art career full-time. I am deeply grateful for my time in technology, strategy, and operations. It is not easy to step away from such a dynamic environment, and I am so proud of the work I’ve accomplished and the caliber of people I’ve had the privilege to collaborate with. Some of you may know that over the last two years, my art has taken on a life and momentum of its own, and recently made history by being added to the permanent collection of the Arts Club of Washington, their first-ever acquisition by a living artist. It is rare to have a lifelong dream take flight like this, and I am excited to finally give it my full, undivided energy. Thank you to everyone who has supported, mentored, and partnered with me in the corporate space, and to those who have championed my creative journey. I am stepping into this next chapter with immense gratitude, and I am so excited to see where this continuous line takes me next! If you ever find yourself in Washington, D.C., please reach out, I would love to reconnect and welcome you to my art studios. You can also follow my latest work and upcoming events at www.kellydinglasan.com or at @kellydminton on Instagram.
-
Matt Garman liked this:
Amazon added a new member to its senior leadership team Wednesday, naming AWS infrastructure chief Prasad Kalyanaraman to the group known as the S-team or “steam,” while also promoting cloud computing and AI services leader David Brown to senior vice president. Andy Jassy announced the changes internally, according to a memo viewed by GeekWire, and the company updated its public list of S-team members to reflect the changes. https://lnkd.in/gzD76mV2
Amazon names AWS exec Prasad Kalyanaraman to S-team, promotes Dave Brown to SVP
Skills
- Amazon EC2
- Cloud Computing
- Amazon Web Services (AWS)
- Product Management
- Program Management
- SaaS
- Distributed Systems
- SOA
- Scalability
- Enterprise Software
- Big Data
- Software Development
- Go-to-market Strategy
- Start-ups
- Analytics
- Product Marketing
- Strategy
- Strategic Partnerships
- E-commerce
- Entrepreneurship
- Mobile Devices
- PaaS
Patents
-
Maintaining Latency Guarantees for Shared Resources
Issued US 8533103
This AWS patent relates to a technique for providing customers with latency guarantees for AWS resources (e.g., EC2 instances, EBS volumes, S3 buckets, etc.). According to the patent, a customer can set a minimum or maximum latency for a resource based on the time of day or day of the week. In response to receipt of a customer request, AWS can place the resource on a server with enough network throughput to achieve the desired latency.
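As a rough illustration of the idea described above — not the actual patented mechanism, and with all names, units, and numbers invented for the sketch — a placement routine might look up the customer's latency target for the current hour and pick a server whose spare network throughput can meet it:

```python
# Illustrative sketch only; not the implementation from US 8533103.
# A policy maps hour-of-day windows to a target latency in milliseconds.

def desired_latency_ms(policy, hour):
    """Return the latency target for the given hour, or None if unspecified."""
    for start_hour, end_hour, target_ms in policy:
        if start_hour <= hour < end_hour:
            return target_ms
    return None

def place_resource(servers, min_throughput_gbps):
    """Pick the first server with enough spare throughput to meet the target."""
    for name, spare_gbps in servers:
        if spare_gbps >= min_throughput_gbps:
            return name
    return None

# Example: tighter latency target during business hours (9-17), looser otherwise.
policy = [(9, 17, 5), (0, 9, 50), (17, 24, 50)]
servers = [("server-a", 2), ("server-b", 40)]

target = desired_latency_ms(policy, hour=10)   # business-hours target applies
placement = place_resource(servers, 25)        # needs 25 Gbps spare capacity
```

The real system would derive the required throughput from the latency target and workload characteristics; here that translation is elided and the threshold is supplied directly.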
Other inventors. See patent.
-
Data Set Capture Management with Forecasting
Issued US 8515910
This AWS storage patent relates to snapshot policies for different types of data stores, such as S3 and elastic block storage (EBS). Sequences of data set captures (sometimes called snapshots) may be scheduled between different types of data stores to achieve a variety of user and provider goals such as lowering data loss probability, lowering cost, and load leveling. As well as fixed schedules, data set capture schedules can include flexible scheduling windows and also be automatically scheduled to achieve a target capture frequency, a target probability of data loss, and a target cost. Capture retention lifetimes can be similarly scheduled.
Other inventors. See patent.
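The flexible-scheduling idea in the snapshot patent above can be sketched as a toy policy: given a target capture frequency and a retention lifetime, a scheduler emits capture times and prunes snapshots past retention. This is an illustrative sketch under invented names, not the patented AWS implementation:

```python
from datetime import datetime, timedelta

def plan_captures(start, end, frequency):
    """Generate capture timestamps at a fixed target frequency (start inclusive)."""
    times = []
    t = start
    while t <= end:
        times.append(t)
        t += frequency
    return times

def prune_expired(captures, now, retention):
    """Keep only captures whose age is still within the retention lifetime."""
    return [c for c in captures if now - c <= retention]

start = datetime(2024, 1, 1)
now = start + timedelta(days=1)
plan = plan_captures(start, now, timedelta(hours=6))   # captures every 6 hours
kept = prune_expired(plan, now, timedelta(hours=12))   # keep the last 12 hours
```

The patent also describes trading frequency against cost and data-loss probability; a fuller scheduler would pick `frequency` and `retention` to satisfy those targets rather than taking them as fixed inputs.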
Recommendations received
Explore more posts
-
Pierre-Andre Liduena
1K followers
In April, we committed to using our $108 million Series C funding to accelerate our pace of AI innovation and deliver even greater value to customers. We are making good on that promise by introducing a new feature called Container Live Migration that enables DevOps teams to migrate live Kubernetes containers between nodes — including those running stateful workloads — with zero downtime. Container Live Migration is available now for all AWS EKS customers and will be available soon to Google GKE and Azure AKS customers. We have more exciting news planned between now and re:Invent. Stay tuned! https://lnkd.in/e85qssyV
77
1 Comment -
Reid Christian
CRV • 18K followers
As I'm leaving AWS re:Invent in Vegas, I’m reminded of a counterintuitive truth: competing with the public clouds is often really good business. VCs love to ask, “Why wouldn’t the incumbent do this?” But in markets this large (and evolving this quickly) that’s rarely the right question. The real questions are: Who can assemble the best team, get to market fastest and with the most focus? And where are there opportunities to partner with the biggest players rather than compete? We’ve seen this pattern repeat across the ecosystem:
- People assumed Vercel and AWS were destined to compete. Today, they’re strategic partners, with CEO Guillermo Rauch in the keynote
- Snowflake vs. Amazon Redshift. Now, AWS is one of Snowflake’s strongest distribution channels
- In Datadog’s early pitches, the concern was, “If it’s just monitoring AWS, why not use CloudWatch?” Fast forward, and Datadog is now a $50B+ public company
We’re going to see the same dynamics play out with OpenAI and Anthropic. These are becoming foundational platforms. But that doesn’t preclude incredible companies from being built on, next to, and even around them. Some will look competitive. Many will end up as partners. And the biggest outcomes often start in those gray zones
217
26 Comments -
Marc Brooker
Amazon Web Services (AWS) • 19K followers
It's been a week since AgentCore Policy went GA. https://lnkd.in/g2NkEnzg Policy is the result of our ongoing investments in neurosymbolic AI, combining the best of LLM-powered techniques (e.g. combining common-sense natural language policy statements into formal logic) and symbolic reasoning (e.g. checking that tool calls match that policy logic deterministically). We're just getting started with our capabilities in this space, and are continuing to invest deeply in agent safety.
243
4 Comments -
Werner Vogels
Amazon.com • 87K followers
Two of AWS’ Sr. Principal Engineers, Nicholas Matsakis and Marc Bowes, take us inside the development of Aurora DSQL in a fascinating new blog. The post shows how the team scaled write operations without two-phase commit, overcame garbage collection hurdles, and ultimately embraced Rust for both data and control planes. What I find particularly interesting is how the team's journey with Rust evolved. They started cautiously, using it for a single component, and ended up rewriting almost everything in Rust. It's a reminder that sometimes the hard choices - like adopting a new language - can unlock tremendous value. The full post is well worth your time. It's not just about database internals, but also about team dynamics, a culture of learning (and questioning established approaches), and making difficult architectural decisions. Read it here: https://lnkd.in/eMhxBu9e Now, go build!
1,045
16 Comments -
Marc Brooker
Amazon Web Services (AWS) • 19K followers
Check out this new blog post from Werner Vogels, on the architecture and history of Aurora DSQL, and how we chose to build it in Rust. I like how this post touches on more than memory safety, including predictable performance, reduced debugging efforts, and team productivity.
200
2 Comments -
Josh Jarrett
7K followers
“Ride shotgun on smart people’s sustained learning process.” That’s the advice I give people for keeping up with AI developments. Otherwise the atomized data points and hot takes are overwhelming. Fortunately three of those smart people - Steve Newman, Rachel Weinberg, and Taren Stinebrickner-Kauffman - have launched the Golden Gate Institute for AI to help.
11
2 Comments -
Jayshree Mallaya
Third Vision AI • 2K followers
For a while now, I’ve been writing and building in parallel. Some of you have followed my journey as I shaped Third Vision AI not as a consultancy but as something more foundational. Today, I published a piece on Substack that explains what we actually build. As AI systems move from generating content to triggering transactions and activating real-world systems, governance can’t remain a policy document sitting on a shelf. It has to live inside the architecture. Third Vision AI is about embedding authority directly into execution ensuring autonomous systems act within verified boundaries at the moment of consequence. This article is my attempt to explain that shift clearly and simply. If you’re curious about where AI governance is heading, especially in finance, logistics, and sovereign infrastructure. I’d love for you to read it. From Ground to Orbit. Here it is: https://lnkd.in/dtzFAGfj #ThirdVisionAI #ExecutionAuthority #SovereignAI #AIInfrastructure #AgenticAI
9
11 Comments -
Zack Kanter
Stedi • 6K followers
We shipped the Stedi Agent – from first design doc to public announcement – in 13 days. At AWS re:Invent, I got to sit down with Amazon Web Services (AWS) Director of Technology, Olawale Oladehin, to talk about how we built the agent and the design philosophy behind it. One key principle I shared: Quality doesn't mean slowing down. Instead, we ruthlessly cut scope to maintain high quality at speed. That focus lets us quickly ship simple – but powerful – features that our customers care about. Thanks to Amazon Web Services (AWS) for hosting and the invite. Link to the full recording below.
168
8 Comments -
Christian Ricci
Indigenous Software • 2K followers
As I come to the end of my time at AWS re:Invent 2025, one theme keeps coming up for me: how much Amazon Web Services (AWS), the AI/ML communities, and the other hyperscalers have changed the economics of experimentation. Between the foundation models, the emerging agentic patterns, and the broader AI canon of tools including NLP, machine learning, retrieval-augmented generation, and more, we now have cost- and resource-efficient ways to play with our data. This lets us hyper-personalize, model behavior, and decompose business processes into orchestrated steps that can be handled by smart systems instead of being throttled by shoulder taps and tribal knowledge. That shift is creating a wave of new tools and companies, no doubt. But there is also an immediate application for our team: matching customers to their needs more directly. Customers want answers. The “10 blue links” (another re:Invent topic, courtesy of an interview with Perplexity's Aravind Srinivas by the Acquired team) was always a hack, a workaround that pushed the burden of getting to an answer back onto the user. AI search systems are flipping that model by giving people what they wanted instead of sending them on a treasure hunt. As marketers, we have the same opportunity. Let’s use this new toolkit to match customers with what they need. The art and the engineering should aspire to get customers what they want at their moment of need, and to provide that offer in context, with no more and no less fluff than they require to stay a customer now and for the long run. Also worth a read. Check out what RVLVR friend, Justin Kuss had to say about re:Invent.
13
2 Comments -
Sasha Kipervarg
Stanford University Graduate… • 5K followers
Cloud cost uncertainty isn’t a fluke; it’s by design. Cloud providers profit from complexity: 💸 Granular billing 💸 Shifting discount programs 💸 Opaque pricing for AI workloads 💸 Exploding SaaS and shadow IT The result? Engineering and FinOps leaders are left scrambling to explain unpredictable cloud bills while innovation can’t slow down. Traditional dashboards and static budgets won’t save you. You need adaptive cloud cost management that embeds cost awareness into engineering workflows, detects anomalies early, and drives shared accountability. We break it all down in our latest post, including 5 steps to start leading through the chaos. Read it here: https://lnkd.in/ge4jadxr #FinOps #CloudCost #EngineeringLeadership #CloudComputing #AI #SaaS #Ternary
35
1 Comment -
Kurt Glore
Amazon Web Services (AWS) • 2K followers
Amazon Kiro removes the undifferentiated heavy lifting and lets software engineers focus on writing code. I've worked with thousands of software engineers during my career. They like to write code! What most do not like doing is writing test cases, documentation, code reviews, creating demos, setting up logging and monitoring, error handling, etc. I've been working on a multi-agent CrewAI workflow for a real-world use case that will save time and improve executive engagement in my day job. I recently imported my project into Amazon Kiro; it executed a code review in around 120 seconds and provided me with an initial score, as well as what I should improve at high, medium, and low priority. Amazon Kiro made several suggestions for me. One of the cool things was having Amazon Kiro write documentation for the project. Here is the documentation that was created:
📚 Complete Documentation Suite
1. Main README - Project Overview: comprehensive project introduction with features and architecture; quick start guide and installation instructions; usage examples and configuration options; development guidelines and contribution information
2. API_REFERENCE - Technical Documentation: complete API documentation for all functions and classes; environment variables reference; CLI commands documentation; integration examples and error handling
3. DEPLOYMENT_GUIDE - Production Deployment: multiple deployment options (local, Docker, cloud, Kubernetes); environment configuration for different stages; security configuration and monitoring setup; performance optimization and troubleshooting
4. ARCHITECTURE - System Design: detailed system architecture with diagrams; component descriptions and design patterns; data flow architecture and scalability considerations; extension points and quality attributes
5. TROUBLESHOOTING - Problem Resolution: common issues and solutions; debugging techniques and tools; performance troubleshooting; advanced debugging methods
6. CONTRIBUTING - Developer Guide: development environment setup; code style guidelines and standards; testing requirements and examples; pull request process and community guidelines
7. EXAMPLES - Usage Examples: basic and advanced usage examples; programming integration examples; Docker and testing examples; monitoring and metrics collection
8. FAQ - Frequently Asked Questions: general questions about the system; technical setup and configuration; usage and troubleshooting; integration and scaling questions
9. CHANGELOG - Version History: detailed version history and release notes; breaking changes and migration guides; performance improvements and bug fixes; future roadmap and planned features
#AmazonKiro #AIDLC #SoftwareEngineering #AI4Developers #DeveloperProductivity #MultiAgentAI #DeveloperTools #AWS
22
-
Young Bang
Two Six Technologies • 20K followers
As AI continues to reshape how agencies build, deploy and secure software, transparency and deeper risk understanding are becoming essential to getting it right. I recently joined Sonatype Senior Vice President of Product, @Tyler Warden, and Lockheed Martin Vice President and Chief Digital AI Officer, Mike Baylor, for a GovCIO Media & Research #GovFocus virtual panel on the realities of open-source AI — where it accelerates innovation, where it introduces new exposure and what federal teams should be thinking about as they modernize their software supply chains. If you're navigating AI adoption or looking to strengthen your security posture, this discussion offers a grounded look at the challenges and opportunities ahead. Watch “How AI is Reshaping Open Source, Software Supply Chain” here: https://lnkd.in/es9xv9Wi #SupplyChain #FederalIT #GovTech #AI #ArtificialIntelligence
109
1 Comment -
Brian Gero
3K followers
Most teams are planning one to three years out for AI infrastructure. But with limited capacity and rising demand, that may not be enough. If your plans don’t account for infrastructure constraints, you’re already behind. Flexential's 2025 State of AI Infrastructure Report shows how leaders are adjusting. Learn more: https://ow.ly/Fa0f30sMEnB #FlexAnywhere #AI #CapacityPlanning #Infrastructure #DataCenters #AIInfrastructure
20
-
Elizabeth Gorzney, PMP, ITIL
Ambiflo • 1K followers
🚀 Modernization is no longer about cloud migrations or system upgrades. The real transformation is happening in AI workflows, where design, testing, deployment, and monitoring are being reimagined with intelligence built in. I’ve shared some thoughts on why AI isn’t just a tool, it’s becoming the workflow itself. 📖 Read my full article here, alongside Amazon’s perspective on how generative AI is reshaping developer workflows: Scaling and Modernizing with AI Workflows Modernization is no longer just about cloud migrations or system upgrades. Today, it’s about embedding AI directly into the way we build, deliver, and scale software. Amazon recently shared how it is transforming developer workflows with generative AI. What resonated with me wasn’t just the efficiency gains — it was the broader lesson: AI is not just another tool. It’s becoming the workflow itself. From Tools to Workflows The real shift happens when AI is woven into the end-to-end process: 1. Design & coding with predictive and generative support. 2. Testing & monitoring with automated detection of errors, drift, and anomalies. 3. Deployment & operations with built-in governance and feedback loops. Treat AI as an add-on, and you’ll see incremental benefits. Treat it as the workflow, and you unlock scale, adaptability, and resilience. What Scaling Really Requires From leading AI-driven initiatives, one truth stands out: scaling is about trust and governance, not just technology. 1. Policy-as-code and governance embedded in pipelines. 2. Continuous monitoring for bias, drift, and security risks. 3. Human-in-the-loop oversight to keep AI outcomes reliable and responsible. Without these foundations, AI can create fragility. With them, it becomes the backbone of modernization. The Bigger Picture The Amazon story is just one example of what’s possible. The bigger shift is clear: AI workflows aren’t side projects anymore, they’re the new operating model. 
Curious how others are embedding AI into development workflows; what’s worked, what hasn’t? Would love to compare notes. 👉 For those interested in a deeper dive: How generative AI is transforming developer workflows at Amazon https://lnkd.in/deygVBDq
6
1 Comment -
James J. Dimmer III
CHALLENGER • 21K followers
HCF is a reality, but supply and lead times pose a constraint. For those scaling HCF metro/DCI, support is available for production-ready connectivity, including jumpers, pigtails, and assemblies, along with rapid-turn builds. CHALLENGER can help. Amazon Web Services (AWS) Microsoft Microsoft Azure Google
1
-
Tyler Postle
Voker (YC S24) • 6K followers
🚀 Built to Ship – Hosted by Tyler Postle, Co-Founder of Voker (YC S24) 🎙️ 🎧 Featuring Aizaz Manzar, Chief Agentic AI Product Officer for Sales and Marketing at Amazon Web Services (AWS) In today's episode, I learn why AI isn't failing us—we're just building on shaky foundations. Aizaz shares how 16 years in business roles at Dell taught him that insights without context are meaningless. When he joined AWS 7 years ago, he started building the data infrastructure that would later become the foundation for their agentic AI systems. The problem: Most companies are frustrated with AI because they're using simple chatbot implementations without proper data structures, knowledge graphs, or business context. They expect magic from tech teams alone, but this is a tech + data + business problem. Aizaz's solution at AWS: Build hybrid teams where data engineers understand business, product managers can code, and everyone knows 80% of the business domain. His team owns the full vertical stack—from backend data infrastructure to business context layers to agentic AI products. Key insight: "A good knowledge graph isn't differentiated by technology—it's the business knowledge infused into it that matters."
19
2 Comments -
Annie Pearl
Microsoft AI • 9K followers
Today we announced that Maia 200 is now online in Azure. Maia 200 reflects a year-long focus on making AI inference more efficient and easier to deploy at scale. It delivers 30% better performance per dollar than our latest-generation fleet, with strong FP4/FP8 throughput optimized for large-scale AI workloads. In addition to performance and deployability, it also helps us make production-ready AI at hyperscale more sustainable by lowering energy per workload. Pretty exciting.
20