Katherine Lim

Unveiling the Future: Exploring AWS Innovate - AI/ML Data Edition

Image: AWS Innovate - AI/ML Edition conference


The recent AWS Innovate - AI/ML Data Edition conference was a great opportunity to find out what AWS offers in the realm of artificial intelligence, machine learning and data management. In this virtual gathering, AWS experts took a deep dive into the latest advancements, best practices and real-world applications of AI/ML and data technologies from Amazon Web Services (AWS).


Image: AWS Innovate online conference

The conference aimed to showcase the transformative power of AI and ML solutions on the AWS platform. With a focus on practical insights and hands-on learning experiences, attendees could gain invaluable knowledge to harness these technologies in their respective domains.


Keynote Highlights


The highlights of the keynote were the use cases. Amazon itself uses AI comprehensively. For example,

  • Amazon Rufus uses Generative AI for trying on clothes virtually;

  • Amazon Defective Product Detection uses a computer vision AI model trained on images of perfect-looking packages so that it can recognise defective or broken products.

Examples of AWS AI customers included:

  • Adobe for Generative AI - Adobe Firefly;

  • Runway AI for generating videos;

  • Canva for Text to Image functionality - you can request an image of a “panda on a surfboard” for example;

  • Autodesk uses AI to render structures to specifications, for example, to reduce weight.


Hardware-wise, AWS provides the latest NVIDIA H100 Tensor Core GPUs with its P5 instance selection. Many other NVIDIA hardware types are available in EC2. AWS also makes available its own custom silicon AI accelerator, Trainium, under the Trn1 instance type, which is purpose-built for deep learning training of 100B+ parameter models.


Amazon Bedrock was presented as the main AWS offering for Generative AI. There was a broad choice of models, from Anthropic Claude 2.1 to Meta Llama 2 to Amazon Titan, which can be used for text summarization, chat and image generation. A product design use case was demonstrated that produced sketch-to-design renderings of multiple variations of an object, then used out-painting to place the object in the desired environment. Using Knowledge Bases for Amazon Bedrock, you can import text into a vector database and query it with one of the available LLMs (Large Language Models).

Following on, Amazon Q is a Generative AI service for question and answer. It can be used like a ChatGPT-style chatbot, or like Copilot as an AI coding assistant in the Visual Studio Code editor. You can set up Amazon Q as your business expert by connecting it to supported platforms like Slack, Confluence, Dropbox and Zendesk; it indexes the information from these SaaS platforms securely and privately, and can also perform tasks like creating Jira tickets.
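To get a feel for what calling one of these Bedrock models looks like, here is a minimal sketch using boto3, assuming your AWS credentials are configured and you have been granted access to the Claude 2.1 model in your account. The helper names `build_claude_request` and `summarise` are hypothetical, not AWS APIs.

```python
import json

# Hypothetical helper: builds an invoke_model request for Anthropic
# Claude 2.1 on Amazon Bedrock, using the Claude text-completions
# prompt format ("\n\nHuman: ... \n\nAssistant:").
def build_claude_request(user_text: str, max_tokens: int = 300) -> dict:
    return {
        "modelId": "anthropic.claude-v2:1",
        "body": json.dumps({
            "prompt": f"\n\nHuman: {user_text}\n\nAssistant:",
            "max_tokens_to_sample": max_tokens,
        }),
    }

def summarise(text: str) -> str:
    # Requires AWS credentials and Bedrock model access in your account.
    import boto3
    client = boto3.client("bedrock-runtime")
    req = build_claude_request(f"Summarise the following text:\n{text}")
    resp = client.invoke_model(modelId=req["modelId"], body=req["body"])
    # The response body is a streaming payload containing JSON with
    # a "completion" field holding the generated text.
    return json.loads(resp["body"].read())["completion"]
```

The same `invoke_model` call works for the other Bedrock models; only the request body format changes per model family.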


Key Takeaways and Insights


For Builders


Of interest to developers, the CodeWhisperer tool can generate code suggestions, unit tests and scan code for vulnerabilities. It’s supported in JetBrains, VS Code, Visual Studio 2022, AWS Cloud9, AWS Lambda console, Amazon SageMaker Studio and AWS Glue Studio. The AWS Toolkit plugin is the one to install in Visual Studio Code.


Application development cookbook: Recipes for building applications with ML and generative AI


Key takeaways from this talk:

  • There is no single optimised model for every task; you need to put together your own model with custom data.

  • Use an AI agent, which is an application powered by an LLM, to perform specific tasks. For example, Streamlit helps generate websites in Python without needing to know CSS, React or HTML, and because LLMs have been trained on a lot of Streamlit code, they generate Streamlit applications very well.

  • To develop API’s, use AWS API Gateway for traditional APIs and use AWS Appsync for GraphQL APIs. GraphQL has built-in type safety, is authenticated by default, and GraphQL APIs are designed to be human readable and hence Foundation Model readable.


Image: Generated image of a delicious bowl of laksa noodles




Enrich and turbo charge your generative AI applications with visual workflow


This talk was a deep dive into Generative AI applications with visual workflow. "In the future we should be just writing business logic and not code", but in the present, Step Functions is the serverless solution for visual workflow. You can write workflows in the Amazon States Language or drag and drop in the graphical Workflow Studio; some parameters will still need to be configured in JSON. To pass the output of one prompt to another prompt and so on, use prompt chaining. Parallel prompt chaining can also be accomplished using the Parallel state: for example, use the Cohere model for the first branch and the Claude model for the second branch, then combine the results, as in the use case of providing two options for a social media post. The tutorial Generative AI application presented a prompt to produce an image, but the 256 KB output limit of Step Functions is too small for image output, so an output path to an S3 bucket was configured to store the generated image of a delicious nasi goreng. External FM (Foundation Model) APIs like the Hugging Face API can be called from Step Functions using an HTTPS endpoint, with an EventBridge connection ARN keeping API secrets secure.
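The parallel prompt-chaining pattern described above can be sketched in the Amazon States Language. The definition below, written as a Python dict for readability, is illustrative only: the state names are placeholders and real Bedrock tasks would also need request `Body` parameters, but the shape (a Parallel state with two model branches feeding a combining state) matches the pattern from the talk.

```python
import json

# Illustrative ASL definition: two branches invoke different Bedrock
# models in parallel, then a final state combines both results.
definition = {
    "StartAt": "DraftInParallel",
    "States": {
        "DraftInParallel": {
            "Type": "Parallel",
            "Branches": [
                {
                    "StartAt": "CohereDraft",
                    "States": {
                        "CohereDraft": {
                            "Type": "Task",
                            "Resource": "arn:aws:states:::bedrock:invokeModel",
                            "Parameters": {"ModelId": "cohere.command-text-v14"},
                            "End": True,
                        }
                    },
                },
                {
                    "StartAt": "ClaudeDraft",
                    "States": {
                        "ClaudeDraft": {
                            "Type": "Task",
                            "Resource": "arn:aws:states:::bedrock:invokeModel",
                            "Parameters": {"ModelId": "anthropic.claude-v2:1"},
                            "End": True,
                        }
                    },
                },
            ],
            "Next": "CombineResults",
        },
        # The Parallel state outputs a list with one result per branch,
        # e.g. the two social media post options to choose between.
        "CombineResults": {"Type": "Pass", "End": True},
    },
}

print(json.dumps(definition, indent=2))
```

Workflow Studio produces essentially this JSON for you when you drag a Parallel state onto the canvas and drop a Bedrock InvokeModel task into each branch.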


Image: Generated image of a delicious nasi goreng



Bringing the LLM closer to your SQL databases via Agents for Amazon Bedrock

Adding a relational database as context to your model can be challenging due to technology changes (difficulty integrating), sprawl (data is available everywhere) and flexibility (systems that are too strict). Using Agents for Amazon Bedrock, the use case of “how much profit did we make last year?” can be solved. The Agent workflow consists of: select a Foundation Model, provide a prompt, select data sources (Knowledge Base), and specify actions (Action Group). Agents work with Amazon Athena and RDS, and also support Apache Iceberg and Parquet.
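Once an agent is configured, asking it a natural-language question from code is a single API call. The sketch below assumes you have already created an agent and an alias in Bedrock; the `AGENT_ID` and `AGENT_ALIAS_ID` values are placeholders, and `build_agent_request` is a hypothetical helper, not part of the AWS SDK.

```python
def build_agent_request(question: str, session_id: str) -> dict:
    # Placeholder IDs: substitute the agent and alias you created
    # in the Bedrock console.
    return {
        "agentId": "AGENT_ID",
        "agentAliasId": "AGENT_ALIAS_ID",
        "sessionId": session_id,
        "inputText": question,
    }

def ask_agent(question: str) -> str:
    # Requires AWS credentials and a configured Bedrock agent.
    import boto3
    client = boto3.client("bedrock-agent-runtime")
    resp = client.invoke_agent(**build_agent_request(question, "demo-session"))
    # invoke_agent returns an event stream; concatenate the text chunks.
    return "".join(
        event["chunk"]["bytes"].decode("utf-8")
        for event in resp["completion"]
        if "chunk" in event
    )
```

Behind that one call, the agent decides whether to query the Knowledge Base, run an Action Group (for example, a SQL query against Athena), or both, before composing its answer.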


Implement MLOps practices with Amazon SageMaker

Implementing MLOps (Machine Learning Operations) practices involves integrating SageMaker into your existing MLOps workflow to automate and streamline the end-to-end process of developing, deploying and managing machine learning models. The SageMaker Model Registry contains all your models and supports cross-account sharing. SageMaker can automatically generate pipelines that execute CI/CD steps such as building a model, then storing and executing the pipeline. A pipeline can also include consecutive steps like manual approval, endpoint deployment, testing, a second manual approval, and production deployment.
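The pipeline flow described above can be made concrete as an ordered list of stages. This is only a sketch of the sequencing, with illustrative stage names; a real implementation would define step objects with `sagemaker.workflow.pipeline.Pipeline` rather than plain strings.

```python
from typing import Optional

# Hedged sketch of the CI/CD stages described above, modelled as an
# ordered list so the flow is explicit. Stage names are illustrative.
PIPELINE_STAGES = [
    "build_model",
    "register_model",           # lands in the SageMaker Model Registry
    "manual_approval",
    "deploy_staging_endpoint",
    "test_endpoint",
    "manual_approval_prod",
    "deploy_production",
]

def next_stage(current: str) -> Optional[str]:
    """Return the stage that follows `current`, or None after the last one."""
    i = PIPELINE_STAGES.index(current)
    return PIPELINE_STAGES[i + 1] if i + 1 < len(PIPELINE_STAGES) else None
```

The two manual-approval gates are the key design choice: nothing reaches a staging or production endpoint without a human sign-off, even though every other stage is automated.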


Conclusion

In essence, the AWS Innovate - AI/ML Data Edition conference serves as a catalyst for innovation and growth, empowering attendees with the knowledge, tools and resources needed to thrive in an increasingly data-driven world. By bringing together multiple tracks with sessions for builders and developers, the event sparks creativity, fuels curiosity and paves the way for a future where AI, ML and data continue to drive unprecedented value and impact. Innablr has been assisting customers to adopt AI to enhance their products and customer offerings. Come chat to us if you are looking to start your AI/ML journey.


