AWS re:Invent 2024 - Better together: Protecting data through culture and technology (SEC302)

Hi, good morning. We're going to talk about a lot of the core competencies involved in protecting data and the different mechanisms we have across AWS to achieve that, including some you may not be familiar with. Because you're not familiar with them, and your colleagues and your leadership might not be familiar with them, there is an opportunity to do more. That's really the soul of this talk: to give you an idea of the range of things that are possible on AWS, but also to share what we have learned in our own practice and what we hear from customers, which is that the most important part of a security program is your people. Not just your people, but the set of norms and expected behaviors that you can have of one another and that they ought to have for themselves.

My name is Peter O'Donnell. I'm a principal solutions architect, I've been here for nine and a half years, and I've had the privilege of working with some of our largest and most challenging customers, including some exciting ones that were on stage this morning.

We know this idea of culture can be hard to pin down. What is culture? I submit to you that the purpose of culture is to eliminate renegotiation. When you first hear that it sounds a little funny, but what it means is that there are standards, expected norms and behaviors, that you're done talking about. You're still going to have to talk about them to propagate them, but they're no longer in dispute. If you go to the subway here in the West, you'll go down the right-hand side of the stairs; if you visit Tokyo and see their extraordinary subway system, you'll go down the left-hand side, because that's simply what is done there. It's no longer a matter for dispute or readjustment. Culture is a set of behaviors and norms that you stick with and expect others to stick with.

At AWS, and we encourage you to cultivate this in your own house as well, security is literally everybody's responsibility. Yes, we have a sophisticated organization called AWS Security, but security begins with engineering, it begins with product definition, and it begins in your support operations. I'm old enough to remember doing tech support in my younger days, and support might have an unusual level of access. Security is everyone's responsibility, and you must build it into all layers of the stack: at every layer there is an opportunity to ask what could go wrong and what am I going to do about it.

We call this behavior rigor: you must do the work, seek out these opportunities, understand the technical capabilities we're going to talk about today, match them to your control objectives, set them up properly, and then observe them over time so that you know things are still working. If you've ever had a debate about preventative controls versus detective controls, the bad news is that you still have to build the detective controls to understand that the preventative ones are working. As we step through the material today, I want you to understand that good intentions do not work, but mechanisms do. By building more and more mechanisms to implement controls and configurations, to have them right and safe and keep them that way over time, you can get really high-quality outcomes. But as I said, it begins with people.
At Amazon we have a program called the Security Guardians program, and what it really means is: how do we scale out AppSec, what we otherwise call proactive security? How do you get everybody to become a security person? I'm sure you've heard of train-the-trainer, and of reaching your power users first so they can help cultivate a practice. Security works the same way, and this is how we've done it. Not everything we do is the right move for every customer, and you can make your own decisions, of course, but what we hear from customers who've adopted this model is that it works really, really well: you can deputize folks and spread security throughout your organization, most especially in your engineering and product organizations. I have had too many conversations with CISOs who tell me they're struggling and then describe a problem that really belongs to engineering or to product. Those folks, and the chief executive, all have to believe that security is the top priority. If you get this going and build some momentum, the expected benefits are titanic. They're not only benefits in the sense of "this is more secure"; they're also good for your stakeholders across the company and maybe outside the company. If you have Sarbanes-Oxley obligations, if you have board-of-directors and publicly-traded-company obligations, if you have regulatory challenges, if you've got any kind of supervisory external stakeholder, they want the same things as your security people, and your compliance people want those same things too.

So we encourage you to build security not only into your products and your technologies but into your development teams, and you can move towards a place of continuous compliance and compliance engineering. Think about what an SRE needs: great observability, really good logs, repeatability, infrastructure as code, all that good stuff. As it turns out, your compliance colleagues want the exact same things, and your auditors want the exact same things. In the same way that I believe you should bring security into your builder teams, bring your audit people into your security teams and into your engineering teams, because it should not be an adversarial relationship. I get that it often is; I've lived on both sides of that. But if you get them in early, just like the security people, you can have more productive, literally healthier, psychologically safer, nicer-to-go-home-at-the-end-of-the-day conversations with your colleagues, because you all truly want the same things. So as you think about compliance and about your engineering teams, bring them in early. We give you a lot of technical capabilities to achieve some very critical compliance goals, and whether you're an engineering manager, a security engineer, a software engineer, or a compliance person, which I think probably describes about 80% of you, these are critical objectives. You can attain them through the same mechanisms you're using for all the rest of your engineering and security work.

So let's talk about some of the different dimensions of protecting your data. If culture is the foundation, what else remains? I'm going to move through these quickly; there's a lot here, and there's a deep dive available on effectively all of these topics in separate talks, so we are going to move quickly.
These are the classical dimensions, and of course we make available a lot of great products and services to help you reach these goals. We also know there's a long journey in protecting your data. Yes, you need to protect the data in the place it ends up, but if security is everyone's responsibility, that also means it is literally everybody's responsibility across the different roles in your infrastructure: your network team has a role to play, and the people who administer the messaging buses and the queue systems have a role to play, because there is a long lifecycle here and you need to be protecting data at every stage.

One of the classically overlooked ways to get incredible value is to ensure certain things about your data: that it can only be accessed by your people, from within your infrastructure, using your credentials; and therefore not accessed by your credentials outside your infrastructure, not accessed by somebody else's credentials outside your infrastructure, and not accessed by somebody else's credentials from within your infrastructure, because the bad guys are going to bring their own creds to your house. We encourage you to reason about this idea, called the data perimeter, and a couple of key controls to effect it. Again, this is a story of cultural rigor: you can't do one thing, you have to do a bunch of things, so that if one of them fails, through malicious intent or accident, you're not relying on that one thing. We've called this defense in depth for a very long time, back when it was quite literally perimeter defenses, but this is also defense in depth: reasoning about different areas to apply controls and get high-quality outcomes.

When you reason about your identities, the resources they connect to, and the networks they're on, we give you really strong capabilities across the entire portfolio, and there are great deep dives on all of this. We recently announced a new type of policy control called resource control policies, which is another way of adding belts and suspenders to data protection outcomes. The data perimeter idea can deliver really high-quality outcomes: you can apply policy to your resources to ensure they're only accessed in ways that make sense. What you see here is a condition that the request must come from a credential that belongs to me, aws:PrincipalOrgID, that is to say a credential issued by my own AWS organization. This prevents an adversary, or a too-eager developer, from bringing in their home credentials and attempting to use them within your infrastructure. The same idea applies on the network: we're only going to allow use of this VPC endpoint if it belongs to me and these creds belong to me. Here's another example on the resource: you can put a message into this queue, but only if it's actually my queue. Belts and suspenders all the way around. And here's one more example: we want to stop somebody in the coffee shop from accessing your bucket directly. S3 buckets are locked down by default, of course, but your teams might be able to mutate some of those policies in ways that you don't find acceptable. For data perimeters, again, deep dives are available; search online for that phrase and there is incredible content available to you.
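The exact policies from the slides aren't reproduced here, but as a minimal sketch of the identity-perimeter idea, this is roughly what attaching such a condition to a bucket could look like with boto3. The bucket name and organization ID are hypothetical, and a real data perimeter layers this with resource control policies and VPC endpoint policies rather than relying on a single bucket policy.

```python
import json
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name and organization ID for illustration.
BUCKET = "example-sensitive-data-bucket"
ORG_ID = "o-exampleorgid"

# Deny any request whose calling principal was not issued by our own
# AWS Organization -- one belts-and-suspenders layer of a data perimeter.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnforceIdentityPerimeter",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {
                "StringNotEqualsIfExists": {"aws:PrincipalOrgID": ORG_ID},
                # Avoid blocking AWS service principals acting on your behalf.
                "BoolIfExists": {"aws:PrincipalIsAWSService": "false"},
            },
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```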
Encryption is another classical way of protecting your data. If you get your crypto from us, we're on it: we have really high-quality crypto and a world-class team building those tools, and we provide both data-at-rest and data-in-motion encryption for all of the AWS services that store and process data. We've recently completed some big milestones regarding data in motion: we now support TLS 1.3 everywhere. TLS 1.3 is really important. It's not only better and safer, it's also faster to the first encrypted byte, and it is way faster to the first encrypted byte if the server and the client have already met. That's a big deal; it drains tail latency out of your remote use cases. So we encourage you to adopt TLS 1.3 for a couple of different reasons.

On AWS you can also do client-side encryption; we provide open-source SDKs. We have a really great open-source SDK for client-side encryption with DynamoDB that gives you searchable encryption, which used to be an insane fantasy. And we apply encryption at lower levels of the infrastructure that you don't see, or even smell, but it's there anyway: all of the networks that connect AWS-controlled facilities are themselves encrypted at a lower level, and cross-region VPC peering implements its own encryption. So if you put a TLS connection over cross-region VPC peering, there are literally something like four different layers of encryption in that stack.

We give you some really great tools. KMS is the right place to get your keys, and with KMS you can do a lot with policy control and get very clever; that is an advanced use case, but definitely be using it. AWS IAM, of course, is where credentials and login come from, but you can now enter into IAM using an X.509 certificate, so if you already have a lot of cryptographic identity in your world, a PKI, you can use that PKI infrastructure to assume roles. We have certificate products as well, and again, apply rigor to your certificates: figure out the right pattern. I'm old enough to have taken a couple of pages in the middle of the night for expiring TLS certificates; if you use ACM integrated with ELB, you'll literally never have that problem again, because the system just does it for you. Really, really powerful stuff.
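The talk's client-side encryption story centers on the open-source encryption SDKs, but as a smaller, minimal sketch of using KMS directly to protect an individual field (under 4 KB), assuming a hypothetical key alias, it might look like this. For larger payloads you would use envelope encryption via the AWS Encryption SDK instead.

```python
import boto3

kms = boto3.client("kms")

# Hypothetical key alias; in practice use a customer managed key whose
# key policy restricts use to the roles that genuinely need it.
KEY_ID = "alias/example-data-protection-key"

def protect(plaintext: str, context: dict) -> bytes:
    """Encrypt a small field (< 4 KB) directly with KMS.

    The encryption context is cryptographically bound to the ciphertext
    and must be supplied again on decrypt -- a cheap integrity check.
    """
    resp = kms.encrypt(
        KeyId=KEY_ID,
        Plaintext=plaintext.encode("utf-8"),
        EncryptionContext=context,
    )
    return resp["CiphertextBlob"]

def unprotect(ciphertext: bytes, context: dict) -> str:
    resp = kms.decrypt(CiphertextBlob=ciphertext, EncryptionContext=context)
    return resp["Plaintext"].decode("utf-8")

# Usage: encrypt a value before it is written to a datastore.
blob = protect("4111-1111-1111-1111", {"app": "payments", "field": "pan"})
print(unprotect(blob, {"app": "payments", "field": "pan"}))
```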
But we know there is a shadow looming over contemporary cryptography, and that shadow is the possibility of a sufficiently powerful quantum computer coming into existence. Cryptography is predicated on a set of assumptions, and with asymmetric cryptography one of those assumptions is that it's super hard to factor the product of two large primes. If a powerful quantum computer comes into existence, that will no longer be super hard, and it breaks an underlying assumption. So there is an interesting long-term threat for certain customers: the idea of an adversary harvesting enciphered traffic today and keeping it in the freezer for gosh only knows how long, ten, twenty, thirty, a hundred years; nobody's quite sure, but if they gain these new capabilities they may be able to attack that stored ciphertext. That means we have to deal with the problem today, because it takes a long time to build new standards, and even longer to get those standards ratified and fully vetted, because if you create a new standard that is survivable for a post-quantum future, you don't want it to be vulnerable to classical attacks either. The good news is that the industry, and Amazon itself, have been working on this for a very long time, and we now have the selected algorithms that will be used to support a post-quantum future. Amazon will be migrating to those over the next five years, and if you get your crypto from us, that is to say ELB, CloudFront, TLS, things like that, we will be ready before you need it.

If your core product involves crypto, in other words not just TLS on your website, if there is something cryptographic about what it is you do, maybe you're signing firmware, maybe you're signing firmware for assets that have a long lifetime, then this is something you need to start worrying about now. But generally, if you get your crypto from us, we're on it, and we'll be ready before it's necessary. We've got a really great blog post coming out this week, so remember to search online; that post has a complete view into where we are and where we're going for PQ. We do support some interesting PQ capabilities today: if you're moving files around using the Transfer Family, which is basically managed SFTP, you can get PQ algorithms there today.

There is a shared responsibility model here. I said we'll be ready before you need it, and that's on us, but you need to build the muscle of replacing and updating your software. This is a muscle you need anyway: you need software inventory, you need the ability to rapidly release changes and updates, and you're going to need it again over the next five, ten, fifteen years to prepare yourself for PQ. The good news is there's a reason to do some of that work today, and that's TLS 1.3: PQ algorithms, and by PQ I always mean post-quantum, are not coming to TLS 1.2, so there is already a good reason to start that work.
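Before that migration, it helps to know which of your endpoints already negotiate TLS 1.3. A minimal check with Python's standard library, against a hypothetical hostname, might look like this.

```python
import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Open a TLS connection and report the negotiated protocol version."""
    ctx = ssl.create_default_context()
    # Refuse anything older than TLS 1.2 so legacy fallbacks surface as errors.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"

# Hypothetical endpoint; substitute your own load balancer or API hostname.
print(negotiated_tls_version("example.execute-api.us-east-1.amazonaws.com"))
```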

Go do that work to get onto TLS 1.3 and prepare yourselves; again, build those muscles if you don't have them, so that you know how to inventory your systems and how to rationally, at scale, and with high levels of assurance replace some of that software. You will have to replace the AWS SDKs and the CLI, and you'll have to update your browsers and everything else that speaks TLS, if you want to start using PQ; the client will need to be updated. We're on it on our side, but just because our load balancers support a PQ algorithm doesn't mean you're ready to take advantage of it. When you are ready, your clients begin what's called advertising to the server; the way TLS works, the client says "I'm willing to do the following things" and the server says "so am I." If your client says it prefers PQ, we will support it. Again, that's our horizon over the next several years.

Let's talk about observability. Observability fits into a lot of the other dynamics we're talking about. I mentioned earlier that you still need detective controls even if you build preventative ones: you need to know that the things you expect to work, particularly the things that map directly to security or compliance outcomes, are still happening, and you can do that with AWS services. CloudWatch can ingest your logs and your metrics, and its ability to synthetically create new metrics is one of the best features in the product. That is to say, you can observe how many times a certain event happens, how many times another event happens, turn those into numbers, and then observe a ratio between those events: event 123 is only really bad if it's more than 20% of event 345. CloudWatch lets you do that, and that has enormous security value. It's bad to see failed SSH attempts; it's even worse if they're the majority of the connections. You can build really high-quality observability using the AWS services. CloudTrail, of course, is the journal, and you need to be using your CloudTrail. Yes, you need it for after-the-fact forensics if something bad happens, but you can also use CloudTrail as a source of information and signal to improve your operations, to understand your development and deployment patterns and observe them over time. You can use that rich data to improve your operational excellence, and improved operational excellence leads directly to high-quality security outcomes.
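As a sketch of the ratio idea described above, here is roughly what a metric-math alarm could look like in boto3. The metric names and namespace are hypothetical, and the 20% threshold simply mirrors the example from the talk.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical custom metrics; the interesting part is the ratio expression.
cloudwatch.put_metric_alarm(
    AlarmName="failed-ssh-ratio-too-high",
    # e1 is a synthetic metric computed from the two raw metrics below.
    Metrics=[
        {
            "Id": "e1",
            "Expression": "100 * (m1 / m2)",
            "Label": "FailedSshPercent",
            "ReturnData": True,
        },
        {
            "Id": "m1",
            "MetricStat": {
                "Metric": {"Namespace": "Example/Security", "MetricName": "FailedSshAttempts"},
                "Period": 300,
                "Stat": "Sum",
            },
            "ReturnData": False,
        },
        {
            "Id": "m2",
            "MetricStat": {
                "Metric": {"Namespace": "Example/Security", "MetricName": "TotalConnections"},
                "Period": 300,
                "Stat": "Sum",
            },
            "ReturnData": False,
        },
    ],
    # Alarm when failed SSH attempts exceed 20% of all connections.
    ComparisonOperator="GreaterThanThreshold",
    Threshold=20.0,
    EvaluationPeriods=3,
    TreatMissingData="notBreaching",
)
```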
One of the inspirations for this talk was a story I heard a long time ago from a customer. They were very proud that they had gotten taxpayer IDs, Social Security numbers, out of their call center application. A year later, they wanted to do some natural-language processing on the comments the agents put in, very cool, and they started using one of our data detection products, Macie, which can scan your S3 resources and tell you things like: are there credit card numbers in this bucket, are there Social Security numbers in this bucket? This customer was shocked beyond words to discover that there were Social Security numbers in the unstructured data pool from the call center. How did that happen? When they took the field out of the form the call center agents use, they thought they had solved the problem, but all that had really happened is that the Social Security numbers started going into the comments field, and they were unfortunately in a position where they had polluted a downstream data repository, a warehouse, with SSNs that should never have been there.

I mentioned earlier the idea of really doing the work and applying rigor and security at every step and in every layer. In a modern distributed system there are a lot of messaging buses where one component calls another, and if you're using AWS one of the best ways to do that is SNS, the Simple Notification Service. It's a ubiquitous messaging bus: you build software that puts messages in and software that gets messages out. It's awesome; it does exactly what it says on the tin. And SNS itself has the ability to detect PII. You may have other places where you detect PII, but this is another example of that cultural, obsessive rigor: even in the messaging bus there's an opportunity to police and detect unwanted data in those streams. It's not only supported in SNS, it's also in CloudWatch. Why CloudWatch? There's this idea called a log spill, and log spills are scary and terrible, because yes, I need to protect my customer database, obviously, but as it turns out there are other places in your operating environment that might become polluted, as it were, with data that shouldn't be there at all. If you've got a developer who turns on the wrong debug flag for a web application, all of a sudden you've got query strings in your logs, and those query strings might have customer-confidential or company-confidential information in them. So as you bring data into something like CloudWatch Logs, CloudWatch Logs itself has the ability to detect things that should not be there. It's a really powerful pattern, and another example of the rigor required to do the work and get the high-quality outcome.

This is what these look like: a data protection policy, available in both SNS and CloudWatch Logs, two services that might not be on your radar when it comes to protecting data. As it turns out, they're a really important part of ensuring a high-quality, compliant, assured, audit-proof way of managing your environment, by applying rigor and deep ownership across everybody, because the people running the log ingestion system might not think protecting customer data is their job. Why would it be? It's just a log system. But again and again, security must be everyone's responsibility. We have managed identifiers for this, so you don't have to invent regexes until you grow old and retire; there are built-in capabilities in these detection features of SNS and CloudWatch Logs where we automatically recognize these data types and provide them to you for no additional charge as part of the core product. There are really good benefits here: it scales well because it's part of the fabric of the messaging service itself, and you get really good visibility into it. It answers two questions: not only making sure things are working correctly, but also being able to attest to your boss, to her boss, to the audit boss, whomever, when they ask "how do you know there's nothing sensitive in there?" Three ways: number one, at the front end we made sure it was never going in; number two, in the flows between systems we made sure it was never there; and number three, at the back end we were scanning that data as well. Belts and suspenders, multiple controls, so that you still get the high-quality outcome even if one of the controls or one of the technologies fails, whether through malicious intent, God forbid, or just a misconfiguration.
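As a sketch, this is roughly how a data protection policy might be attached to a CloudWatch Logs log group with boto3. The log group name is hypothetical, the findings destination is left empty, and the policy grammar is abbreviated, so treat the managed data identifier ARNs and field names as illustrative and check the service's policy reference for the exact shape.

```python
import json
import boto3

logs = boto3.client("logs")

# Hypothetical log group; the managed data identifiers referenced below are
# examples -- the full catalogue is in the CloudWatch Logs documentation.
LOG_GROUP = "/example/webapp/access"
IDENTIFIERS = [
    "arn:aws:dataprotection::aws:data-identifier/EmailAddress",
    "arn:aws:dataprotection::aws:data-identifier/CreditCardNumber",
]

policy = {
    "Name": "example-data-protection-policy",
    "Version": "2021-06-01",
    "Statement": [
        {   # Audit: count and report findings (destinations left empty here).
            "Sid": "audit",
            "DataIdentifier": IDENTIFIERS,
            "Operation": {"Audit": {"FindingsDestination": {}}},
        },
        {   # Deidentify: mask the matched values in stored log events.
            "Sid": "redact",
            "DataIdentifier": IDENTIFIERS,
            "Operation": {"Deidentify": {"MaskConfig": {}}},
        },
    ],
}

logs.put_data_protection_policy(
    logGroupIdentifier=LOG_GROUP,
    policyDocument=json.dumps(policy),
)
```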
Now I want to bring up to the stage a dear colleague of mine to talk about everybody's favorite topic these days, generative AI. Ram and I have been working together for a long time, and I leave you in very good hands.

Thank you, Peter. Hey everyone, I'm Ram Ramani, a principal security solutions architect at AWS; I've been here about eight years. We're going to continue the bigger conversation and talk about protecting data within your generative AI applications. When you're building applications powered by generative AI you have multiple personas participating: applied scientists, data scientists, and developers. Applied scientists are building models, data scientists are prepping data or preparing data sets, and you also have developers. Peter talked about the distributed security ownership model, and it's important that all of these personas put on their security hats and collaborate with the security humans to bring the rigor required to build a secure application powered by generative AI. Just a show of hands: how many of you are actually building generative AI applications within your organization? A few, great.

We're going to use some terms during this presentation that have different definitions elsewhere, so I want to lay them out at the beginning. When we talk about tokens, these are usually words or parts of words. When I say mask or sanitize, I'm referring to replacing data with an identifier that has no relation to the original data. When we say guardrails, we're talking about methods that provide safeguards to prevent undesirable behaviors. And when I say RAG, I'm referring to using organization data to improve the quality of model inference.

We're going to start with three assertions, and at the end of this section we'll decide whether each is true, false, or debatable. The first: I should use guardrails when building an application powered by generative AI. The second: I need to consider authorization for data used by generative AI processes. The third: I need to sanitize data before it enters generative AI processes. We'll look at each of them as we go along.

Some key definitions. When you're building a generative AI application, risk is the possibility of an adverse event with consequences on any dimension of responsible AI. For security, it is exposure to a threat that can compromise the confidentiality, integrity, and availability of an ML or AI system. And from a privacy perspective, it is the exposure or mishandling of sensitive or personal data in the context of an interaction with an ML or AI system. You probably already have access controls for data within your organization, and you may already have granular classification like what's shown on the screen, but you may decide that some of these data types should not enter the generative AI processes in the application you're building. Traditionally, many organizations have only high-level classifications.
With generative AI touching code and many other data types, all of these classifications become very relevant. Expanding on data types, we find that a large percentage of the data within organizations is unstructured, but when customers build applications powered by generative AI they're using public and private, structured and unstructured data. Building generative AI applications brings new challenges, broader challenges related to responsible AI: undesirable and irrelevant topics being surfaced; toxic language that can harm your brand or organization; sensitive data being exposed or ending up with an adversary; and bias or stereotype propagation. In this section we're going to focus on data protection, on sensitive data exposure.

So let's look deeper at the data risk with generative AI. When you're building an application powered by generative AI, you have prompts; you have unlabeled data for continued pre-training of your models; you have labeled data for fine-tuning your models; you have structured or unstructured data used for RAG or by agents; and maybe you're building your own model with your own training data set. All of these enter your models, whether that's a model powered by Amazon Bedrock, a SageMaker model you build, or a third-party model, and a response goes back to the user. You need to think about the associated data risk for every piece of data listed here. Sensitive data can be part of the prompts coming in as well as the responses going out, and you should think about sensitive data exposure risk both for internal applications your employees access and for applications you offer to external customers over the public internet.

Let's talk about model fine-tuning. Labeled data enters a fine-tuning process that operates on one of the model types, producing a fine-tuned model with increased accuracy for specific tasks. When you're fine-tuning a model with a small amount of labeled data, that data needs proper access control so that it isn't poisoned, and you want to think about least privilege: only the fine-tuning process has access to the labeled data and nothing else.

Now, with RAG, or retrieval-augmented generation, a user interacts with a chatbot application, and the RAG process uses data within your organization to generate a context, which gets appended to a system prompt and the question sent to the model, which then generates a response back to the user. Even with RAG you need to think: I have this data, internal to my organization, accessed by the retrieval process. Should I sanitize it? Do I need to think about access control for it? We'll look at these in more detail later.

Let's break this conversation down into a few use cases and study the risks and mitigations. The first risk is that the model you're using in your application may reveal sensitive data about your company; what protections have you built to safeguard against that? Let's reason about this risk with an analogy to driving cars: you want to drive fast and have the best driving experience, while a guardrail protects you in case something goes wrong.
That's why we built Amazon Bedrock Guardrails. Guardrails are a method for you to implement safeguards tailored to your application requirements and your responsible AI needs. That could be about preventing hallucinations, about sensitive data exposure, or about denying specific topics that are harmful or that you think are not appropriate for your customer base. Typically, all of the dimensions represented on this slide are evaluated in parallel, and here we're focusing on the dimension of sensitive data exposure. The reason everything is evaluated in parallel is that it's absolutely critical that users of your generative AI application see as low a latency as possible.

Bedrock Guardrails can work with models supported by Amazon Bedrock, with a model you've built using SageMaker, or with a third-party model. These are independent guardrails: a guardrail invocation does not need a model invocation, you can apply guardrails without invoking a model at all, and the guardrail invocation itself can happen within a Lambda function, a container application, or any other stack you've built. After all, it's a RESTful API you call to apply the guardrail filters I showed you.

Let's look at another use case. The risk we're talking about here is that your sensitive organization data is used by the RAG, the retrieval-augmented generation, process. The questions you need to ask are: do I need to sanitize the data that enters the RAG process, and what mechanisms have I built in my organization so that only authorized documents are retrieved for the requesting individual? This gets into least privilege, so let's take a look at how it works. You have a user, Alice, asking a question of a chatbot application, and the retrieval process in this case is identity-aware, so it's able to check whether Alice is authorized to access the organization data you're using for RAG. If she is, a context is generated, appended to the system prompt and the question, and sent to one of the models; if not, no context is generated and only the system prompt and the question are sent to the model. The response then goes back to Alice, but before it does, it's filtered by the guardrail configuration you set up. In a similar vein, if your retrieval process is not identity-aware, you can orchestrate identity awareness using agents: Alice again interacts with the chatbot application, an AI agent checks whether Alice is authorized, and only then gives the green light to the retrieval process, which has no identity awareness of its own, to generate the context based on similarity search and the question asked, eventually leading to a response that is again filtered by the guardrail before it goes back to Alice.
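Because the guardrail is a standalone API, you can call it on input or output text without invoking a model. A minimal sketch with boto3 might look like the following, assuming a hypothetical guardrail ID and version; the exact response fields can vary by configuration, so treat this as illustrative rather than a complete integration.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

def check_with_guardrail(text: str, direction: str = "OUTPUT") -> str:
    """Run text through a Bedrock guardrail without invoking any model.

    direction is "INPUT" for user prompts or "OUTPUT" for model responses.
    """
    resp = bedrock_runtime.apply_guardrail(
        guardrailIdentifier="gr-exampleid",  # hypothetical guardrail ID
        guardrailVersion="1",
        source=direction,
        content=[{"text": {"text": text}}],
    )
    if resp["action"] == "GUARDRAIL_INTERVENED" and resp.get("outputs"):
        # Return the masked/blocked text the guardrail produced instead.
        return resp["outputs"][0]["text"]
    return text

# Usage: filter a model response before it is returned to the user.
print(check_with_guardrail("The customer's SSN is 123-45-6789."))
```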
So let's dig a little deeper and look at fine-grained authorization. What we've discussed so far is coarse-grained authorization from an identity perspective; now let's look at fine-grained authorization. You can see here that a name is replaced with a placeholder, and similarly for an email address. In the case of the credit card, I'm replacing a few of the digits with asterisks. These are all non-reversible, but some of you may want fine-grained authorization with reversibility, re-identification of specific data for an individual or a group of individuals. In this example I'm replacing a few credit card digits with an encrypted blob, encrypted with an AWS KMS key, plus a key reference for reversibility.

Let's look at how this works in the context of generative AI. We talked about reversible masking; in this case Alice is again interacting with the chatbot application, and the retrieval process is identity-aware, checking whether Alice is authorized. If yes, it generates the context that is appended to the system prompt and the question, but then another process masks it and generates a masked context, using KMS to mask some of the credit card data I showed you on the previous slide. The guardrail is applied to the model's response for the responsible AI policies you've configured, but before the response goes back to Alice, and because in this example Alice is authorized to see the full credit card number in plain text, the masked part of the credit card number is unmasked before it's sent back to her. The way this works with KMS is through key policies: if Alice is a principal within the key policy and she has decrypt permissions, we are able to unmask that part of the credit card number for Alice before it's returned. This gives you a method to build fine-grained authorization within applications that are powered by generative AI. None of this is new to generative AI, but in the context of tokens in and tokens out these mechanisms are desirable, and we've seen many healthcare and financial organizations asking about fine-grained authorization they can build into their applications powered by generative AI.

The third use case I want to talk about is that your organization's sensitive data may be used for fine-tuning your model. The questions to ask are: is it necessary to sanitize the fine-tuning data if protections such as guardrails are applied in downstream use of the fine-tuned model, and do you need to evaluate the fine-tuned model's accuracy after sanitization? This is debatable, and the debate is mostly about centralized versus decentralized governance models. I want to bring back what Peter said earlier about rigor, collaboration, and distributed security ownership: if you don't know whether the downstream users of your fine-tuned model are applying protections such as guardrails, you're better off sanitizing the data; if you don't sanitize, you will need stronger governance of those downstream protections. We recommend that you use guardrails regardless of the design you're pursuing, but there are use cases where you may want to follow a very tight data minimization strategy by sanitizing data before it enters the fine-tuning process. If you do decide to sanitize, you will have to measure the effectiveness of the fine-tuned model for your user experience, and check whether sanitizing maintains accuracy and still provides a good experience for your application.
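The talk doesn't prescribe a particular sanitization tool; as one possible approach, here is a minimal sketch that uses Amazon Comprehend's PII detection to mask spans before text lands in a fine-tuning data set. The confidence threshold and placeholder format are illustrative choices, not a recommendation.

```python
import boto3

comprehend = boto3.client("comprehend")

def sanitize(text: str, min_score: float = 0.7) -> str:
    """Replace detected PII spans with type placeholders, e.g. [SSN].

    One possible sanitization step before text enters a fine-tuning
    data set; offsets are rewritten from the end so they stay valid.
    """
    findings = comprehend.detect_pii_entities(Text=text, LanguageCode="en")
    for ent in sorted(findings["Entities"], key=lambda e: e["BeginOffset"], reverse=True):
        if ent["Score"] >= min_score:
            text = text[: ent["BeginOffset"]] + f"[{ent['Type']}]" + text[ent["EndOffset"]:]
    return text

print(sanitize("Customer Jane Doe, SSN 123-45-6789, called about her bill."))
```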
Model evaluation is critical, and having a model evaluation pipeline that works very well is absolutely needed, even with guardrails and any other protections you may apply. As you can see, processes such as fine-tuning are probably pursued by applied scientists, and they need to think about identity-based authorization and sanitization, which are all security concerns. So the distributed ownership model, along with rigor in security, is absolutely critical as your teams pursue building applications powered by generative AI.

Let's look at one last use case. The risk here is that sensitive data may still be revealed despite all the protections I've talked about: guardrails, identity-based authorization, even sanitization and masking with reversibility. With all of that, is it possible that your application can still reveal sensitive organization data? I want to bring your attention back to rigor again, because you need a rigorous red-teaming process, with employees fuzzing with prompt mutations within your model pipelines, to look for possible vulnerabilities or jailbreaks that could lead to sensitive data exposure. Again, the culture of security and distributed ownership goes a long way toward building a secure application powered by generative AI and improving the experience for your customers.

So let's get back to the assertions I started this section with. The first was: I should use guardrails when building an application powered by generative AI. The answer is true. The second: I need to consider authorization for data used by generative AI processes. Again, true. And the third: I need to sanitize data before it enters generative AI processes. That one is debatable, because of the governance models you may choose to pursue.

That's all we had for you, and I want to summarize with the key takeaways. Peter talked about the importance of culture and distributed ownership for better data protection. We then talked about protecting your data perimeter with all the tooling and mechanisms that are available with policies; please do consider implementing them, they are available today for you to use. You want to start preparing for the post-quantum world; we've made it easy for you to experiment with some of our libraries, such as s2n and AWS-LC, as well as with some of our services, so you can initiate a TLS handshake with a post-quantum key agreement. Try it out and see whether that's something you want to pursue within your organization. Also start thinking about inventory, observability, and log protection as necessary, and take advantage of all the tools available in AWS. And lastly, think about the risks and trade-offs when you're building applications powered by generative AI.

So that's all. Thank you so much. Our LinkedIn profiles, for me, Ram Ramani, as well as Peter O'Donnell, are shown here. We're available outside the room if you have any further questions and want to come and talk to us; we're happy to help. Thank you so much.
