IT Masters Panel - Will AI Take My Job
Hello everybody, and welcome to this webinar hosted by IT Masters and Charles Sturt University as part of the Digital Innovation Festival 2023. Thank you all for joining us; good morning, good afternoon and good evening to those joining from other time zones and locations. My name is Jack Stewart and I will be your host and moderator for today. Those of you who are regular attendees of the IT Masters CSU short courses may recognize my disembodied voice from previous IT Masters presentations. We are joined by a number of wonderful panelists for today.

Really quickly, a little bit of housekeeping. If you have comments that you would like to make to the general attendees as well as the panelists and hosts, please put those in the chat section on Zoom; you can set the "To:" field down the bottom to "Everyone", to prevent them going just to panelists and attendees. If you have specific questions that you would like entered into our Q&A section towards the end of the webinar, you can direct those into the Q&A. As always, we're not going to be able to answer everybody's questions, and we have a number of pre-organized questions to get through as well.

Before we begin the content of the webinar, I would like to acknowledge that we are coming to you today from the lands of the Wurundjeri and Woi-wurrung people of the Kulin Nation, as well as the Biripi people. We acknowledge their unceded and ongoing sovereignty, pay respects to their Elders and ancestors, and recognize their continuing connection to lands, waters and culture.

So welcome once again to this panel on AI and the future of work, which IT Masters and CSU are presenting as part of the Digital Innovation Festival. IT Masters provides high-level industry training in conjunction with the postgraduate IT degrees at Charles Sturt University, such as cloud computing and virtualization, cyber security, networking and systems administration, as
well as a number of others. We will be providing a little bit of information and links to our courses and short courses, which are free and available online; those links will be popped into the chat throughout the webinar's duration.

The reason we are here today doing this particular panel, rather than our usual short-course webinar format, is basically that in our recent short course on Practical AI for Non-Coders, hosted by Luca, who is one of our wonderful panelists for today, there were simply too many fantastic questions and opportunities for discussion for us to possibly fit into the time that was allocated. So we are having a more free-flowing panel discussion rather than the one-way delivery of information that is usual for our short-course webinars. We are also joined by Kit and Shane from IT Masters, and I thank them very much for organizing this panel and for making sure that everything runs smoothly at the back end. Thank you, Kit and Shane.

So, our panelists for today. We are joined firstly by the wonderful Dr Anwaar Ulhaq. He is a senior lecturer and deputy leader in the Machine Vision and Digital Health research group at Charles Sturt University. He has a PhD in AI from Monash University, as well as a professional certificate in machine learning and AI from MIT. He has received multiple academic research and teaching awards, published more than 70 peer-reviewed papers in reputable AI journals and at conferences, and is interested in artificial creativity, deep learning, data analytics and computer vision. Hello, Dr Anwaar.

Hi, hi everyone, thank you very much.

Thanks very much for joining us, Anwaar, lovely to have you. Our next panelist is Roshan Kazui, who is the general manager of data at Australia Post and has been building AI solutions for Australian enterprises for nearly a decade, developing deep learning computer vision solutions for manufacturing inspection, laboratory microscopy, medical imaging and
supermarket retail with his startup Engineered Intelligence. Most recently he has been building AI capability for leading Australian enterprises, developing AI strategy and building teams and capability to deliver sustainable AI solutions at scale. Currently he is leading Australia Post's enterprise data function, focusing on delivering business data transformation and enabling the next generation of digital capabilities for Australia Post's team members and customers. Thank you very much, Roshan.

Hey, great to be here, really excited for the conversation.

Thanks very much, we are as well. Our next panelist is Athen Mullane, who is the founder of Sedona Labs and a pioneer of machine learning agencies in Brisbane and Melbourne. Athen focuses on building performant, fault-tolerant machine learning systems for industries spanning life sciences, biotech, ecology and finance. He wrote a thesis on representation learning for cancer genomics, collaborating with the Queensland Institute of Medical Research and Google Genomics. His AI work focuses on neural representations to support applications requiring custom spatial, temporal and topological relationships, and this is what led to the formation of Sedona Labs. Welcome, Athen.

Hi all, pleasure to be here, looking forward to it.

Thanks very much, Athen. And finally we have Luca Ewington-Pitsos, who is an NLP engineer at Davidson. Luca is a self-taught programmer who began his tech career by winning awards in machine learning competitions. He works as a freelance machine learning consultant and is currently seconded to a cutting-edge NLP proof of concept at Davidson on an ongoing basis. He also helps run the most active AI and ML society in Oceania, as well as a YouTube channel in which he simplifies complex AI topics for a wide audience. Those of you who participated in the recent AI for Non-Coders short course may recognize him as our wonderful mentor for that. Welcome, Luca.
Hey guys, hi everybody!

Unfortunately, our other panelist who was scheduled for today, Dr Jason Howarth, is no longer able to attend our panel, but we are definitely keen to have him involved in a future installment. So thank you very much to all of our wonderful panelists. We are recording this panel, and it will be uploaded to our YouTube channel within 24 hours, along with all of our existing webinars, short courses and other content.

We have a number of questions for the panel to discuss, which we've pre-organized in response to some of the questions and themes we received in our recent short course, but feel free to add your own questions to the Q&A section. Once again, general chat and observations can be directed to the chat, and specific questions about the content to the Q&A; we will choose a few of those, based on time, towards the end.

Our first question is for Athen. We would like to know how Athen, and then our other panelists, view the potential as well as the limitations of generative AI, and how industries can navigate these challenges to harness it to the best of their abilities and the capacity of the technology.

Yeah, I think it's a great question, thanks Jack. We've obviously witnessed in the last 12 to 18 months a pretty crazy explosion. From my side of things, coming from a research interest and then deploying these systems in industry for more than five years, this felt like a relatively obscure field, but now, at least in my LinkedIn feed, and a lot of you guys are probably on LinkedIn, everything is dominated by ideas around generative AI: ChatGPT, and prior to that GPT-3, even GPT-2, there was some hype growing. It's kind of like Bitcoin: once you start hearing about it from your
parents or your grandparents, you know that we've really gone mainstream. I think there are a lot of challenges and limitations still to go. A lot of money and resources have been poured into this; OpenAI is sitting on ridiculous, crazy valuations, everyone is racing to pull money into this, and it's the new boom in Silicon Valley. But I think there are a number of things that are risky. As a generative technique, I would claim that any generative system can't really properly be verified, because if you're asking it to do something creative, you don't know what it's going to come up with. There are certain applications where that works great, where you might want to simulate the expression and the creativity of the wonderful human mind, but there are a number of other applications where having that unbounded output is just not a situation you want in a mission-critical system. We've seen this with self-driving cars: Elon Musk has been promising it's three years out for the last six years, and there's still this long tail of edge cases, because you don't know what you're going to see.

I'm actually more in favor of something that underpins generative AI, which is called foundation models. This was another big innovation of the last two years: taking these classical ML models for image and for language, and representing information in our world, visual information, speech information, perceptual information, in a way that computers can think about and work with. But I'm actually in favor of discriminative AI as opposed to generative AI. I think generative AI has great value in the techniques we've seen, but for deployment in industry, for understanding the multitudes of complex data, I think that's a job that humans
actually aren't that great at. My interest is in healthcare; I've built a few different solutions in healthcare, and as one example, you might want to use your entire medical record: your radiographic information, your biomarkers, your DNA information. All of that pertains to you, but it also pertains to how you're different from the population, and there's no individual who can comprehend all of that and make a useful decision, a discriminative judgment, for you. Doctors spend 20 years developing that wisdom, but I would claim this is a perfect use case for AI systems. Obviously that's another mission-critical use case where we don't want to get it wrong, but it's a situation where flagging someone early for a cancer matters. A few startups in Australia have had goes at this, and we're actually seeing lives being saved by flagging prostate cancer early, things like that. So that's where I would like to see more effort. I don't know how we're going in terms of time, whether that's been five minutes, but those are some thoughts.

Fantastic. I know that Roshan has a lot of feelings about the capabilities and limitations of generative AI; do you have any response or anything to add to that?

Ah, gosh. You know, I do experience the same thing: it's the one thing everybody wants to speak about, which similarly took me by surprise. There's so much value in the traditional machine learning models and methods that we have that we still haven't exploited; gen AI has really just captured the hype, whereas I actually think, similarly, there's so much value here in these other kinds of solutions and tools. I do think, though, that gen AI isn't necessarily going to go away, and I do think that some of the supporting underlying tools within NLP are great, do have a lot of value, and are seeing real advances. So when I think about language interfaces and sentiment analysis and
these other kinds of tools, they've existed for a while, and I see those advances as being meaningful. I think we might see a lot of gen AI that's actually just sentiment analysis and a bit more of this kind of traditional stuff under the hood.

Right. Did any other panelists have anything to add? Anwaar, I think you're muted currently.

Okay, sorry about that. Can you hear me now? Yeah? Okay. So I'm very excited to talk about this as an AI researcher, because before generative AI we were purely working on artificial intelligence that was about making decisions and doing intelligent things, but with generative systems we entered into an additional capability, which is called artificial creativity. Creativity is something we always thought was ours: humans are really good at creativity, and machines, we assumed, wouldn't be able to create new things. Before generative AI we were pretty much focused on the intelligence side: decision making, expert systems and that kind of thing. With the help of generative AI, we now think the machine can also generate something cool, generate something out of nothing. For example, if you look at the cool idea of GANs and diffusion systems, they start with noise, they start with random data, and then create something. Yes, we train them, but the outcome that comes out is really cool and nice. At the moment it's mostly single-modality, or sometimes a very limited multi-modal setup, for example using text and images, but I think when it becomes fully multi-modal, just like human creativity, where we have different senses and all our senses work together to help us create something innovative, then it will become really cool, and I think that's where the advancement is going. In terms of limitations, I would say that, just as human creativity has no limits, I'm pretty optimistic
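Anwaar's description of GANs, starting from random noise and learning to create samples, can be sketched in a few lines. This is a minimal, illustrative toy, not anything discussed on the panel: a one-dimensional affine "generator" and a logistic "discriminator" with made-up numbers, standing in for the deep networks a real GAN would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target distribution the generator should learn to imitate: N(4, 0.5).
def real_samples(n):
    return rng.normal(4.0, 0.5, size=n)

# Generator: a single affine map g(z) = gw*z + gb applied to noise z.
gw, gb = 1.0, 0.0
# Discriminator: logistic regression d(x) = sigmoid(dw*x + db).
dw, db = 0.1, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.05
for step in range(2000):
    z = rng.normal(size=64)
    fake = gw * z + gb          # "create something from noise"
    real = real_samples(64)

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    pr, pf = sigmoid(dw * real + db), sigmoid(dw * fake + db)
    grad_dw = np.mean((pr - 1) * real) + np.mean(pf * fake)
    grad_db = np.mean(pr - 1) + np.mean(pf)
    dw -= lr * grad_dw
    db -= lr * grad_db

    # Generator step: push d(fake) toward 1 (fool the discriminator),
    # using the non-saturating loss -log d(fake).
    pf = sigmoid(dw * fake + db)
    grad_fake = (pf - 1) * dw
    gw -= lr * np.mean(grad_fake * z)
    gb -= lr * np.mean(grad_fake)

samples = gw * rng.normal(size=1000) + gb
print("generated sample mean:", samples.mean())
```

In a real GAN both sides are deep networks and training is notoriously unstable; the point here is only the structure Anwaar describes: the generator turns random noise into samples, and the two models are trained against each other until the generated samples resemble the real ones.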
that machine creativity and generative AI will also have no limits, and new things will keep coming in the coming days.

Yeah, thanks Anwaar. That is, I think, a really good opportunity to move on to our next question, which is about ethical guidelines for integrating AI. I'd like to direct this back to you, Anwaar: what sort of ethical guidelines do you think there should be for companies integrating AI? Things like quotas for human inclusion, or is there anything else in particular that stands out to you?

Thank you very much. As you know, whenever a technology comes along, most of the time the technology is for the benefit of humanity. Similarly, we have developed a new technology called AI, and we want it to work for the benefit of humanity, and that's where the question of ethics comes in. Compared to other technologies, we are a little bit more scared about AI, because all the technologies we developed in the past were not intelligent; they never touched the capability we are really proud of as humans. We humans are proud that we are intelligent compared to every other creature, and this is the first time a technology has actually challenged that capability, intelligence itself. That's why it's a bit of a scary picture, and that's why we are thinking strongly about the ethics side; I think that's one aspect that should be natural. On the other hand, should we think about guidelines? Yes, because when the technology is used, it will be used for our benefit within society, so we need to have some norms and guidelines for using it. Many societies around the
world are now thinking about this. At first we all thought AI was just fiction, but because of the advancements of the last ten years we can see it's no longer fiction, it's actually in practice. Now we can see real uses of AI, and that's why everyone is thinking about responsible AI, ethical AI, the ethics of AI, guideline principles, risk, transparency, privacy by design; these are some of the themes that are emerging, and these emerging fields are really important. Many countries are thinking about it separately, but they're also collaborating. For example, Australia was one of the first: in 2019 we came up with an AI ethics framework with eight principles about ethics and how to use and integrate AI, though it's voluntary, not something every industry must implement, just guidance on the best way to implement it. And recently, last month, the Australian Human Rights Commission came up with a recommendation that we should have an AI commissioner, who would give recommendations to the government on how ethics should be part of the legislative process and how that should be implemented in our system.

Sorry, I'm just going to wrap that up for time at the moment. It is a super, super interesting point, and I am really keen to hear what any of our other panelists have to say about it. Luca, I feel like you potentially have some burning points; were you just wriggling, or is there a particular topic?

I feel like everything has been covered better than I can cover it.

Okay, I misinterpreted your body language there. Any other panelists? Roshan?

Yeah, the way that I think about it might be a little bit different to how Anwaar understands it. For me, the scope and scale of impact of the technology also acts as a driver for us to consider what is appropriate when it comes to its application, the ethical application of
that technology, and also the ethical considerations of that technology. I think all technologies are just tools for human use, and part of what I hear when people talk about how we are using AI is that they're not really talking about AI; what they're expressing is a potential fear about humanity and how we as humans may choose to apply this technology. It's up to us to determine whether we want to apply this technology for good or for supposed bad, and I think when people express concerns, what they're saying is that they don't necessarily feel they have the tools or the capability to engage with and influence the way this technology is being applied, or can be applied, within society. I think it's going to be a tool for tremendous good, but even if it is, think about a cure for cancer: if that's controlled by only a few people, and the benefits of that tremendous good are not shared with everybody, then, you know, I'll be pissed off. I'll be concerned about how this technology is being used and why it isn't being shared. So those factors are also important, and I really want to encourage and engage people to get involved in this and develop the tools to be able to influence it, to make sure that our choice to use, live with and leverage this technology is done in a way that benefits all of us.

Yeah, thank you for that, Roshan. And just on one of the points you made around the scope and scale of AI solutions, I'm really interested to hear how those challenges evolve, and how the answers to those challenges evolve, when you're transitioning from AI prototypes, the kind of earlier stages we've been seeing, to delivering real-world AI solutions at scale, and what your thoughts are on the best practices we can apply to ensure that scaling is successful.

Sure. You broke up a bit there, so hopefully my internet connection is okay; just let
me know if it fails. That is a huge topic, right? People have written books on this. But in simplified terms, when we're talking about prototyping a capability, we're talking about demonstrating some technical capability as a one-off activity, with really clear, defined boundaries, constraints and limitations. Taking it from there to delivering a solution that has an impact in the real world, we're talking about a system that operates under continual change. It needs to align with other kinds of standards, particularly around security and architecture, and it needs to be maintainable. I think I might be lost?

I've got you, yeah; you dropped for a moment, but you came back quite quickly.

Cool, sorry. Now, it's got to interface with other systems and processes, and it really needs to consider human factors, particularly around adoption and use. So there's this whole other world of additional factors that really need to be considered, and to be frank, these days with the solutions that we build, there's probably ten percent of the effort, even less, five percent of the effort, that goes into building the AI model, and often ninety percent of the effort actually goes into enabling that model to have an impact in the real world. Now, there are some really useful, important approaches for thinking about this. One that I imagine everybody here is going to be familiar with is MLOps: the activity of operationalizing a model. So, I've built a model, trained it, and it performs well against a particular simplified heuristic that I have, but then I need to operationalize it. That model needs to live. It needs to be supported with systems: automated data pipelines for ingestion, pipelines for load balancing, inference, potentially streaming inference. Your models need to be monitored, your data pipelines need to be monitored for drift, and potentially you want automated retraining, testing and
redeployment. So there are these technical considerations, which are the first layer around your AI model to make sure that it can live continuously. But the most significant factor, when I think about this, is actually the business process that the model exists to change. It's one thing to build a model, but if there isn't a business process, or a person, that the model exists to change, and if the business process doesn't exist to interact with the outputs of the model, no one is going to use your model, no one is going to adopt it. So actually the best practice is to start with the business change.

Oh no, Roshan, I think we may have lost you. I hope that we've been...

...ultimately, the AI model is just an enabler of that change; the solution needs to be driven from the front, from the business use case. Don't ever start with the technology, that's my one big message: don't start with the tech, start with the process that you're looking to change.

Great, thank you very much, Roshan. I hope we missed only the most minor moments in that; I understood it, so that should be all good. We're going to move on to another question now, and that question is for Luca. Luca has informed us that approximately 85% of all machine learning projects in fact fail, so I'm interested to know what a rate like that means for the future of project management.

Yeah, it's bad. If, just before you went to a store to buy an apple, you were told that 85% of the apples make you sick, that's quite concerning. I feel like there's a lot of talk about all the fancy capabilities of AI, and a lot of discourse around it, but we're definitely pretty far off from being able to actually implement things with AI well, and that's wild to me. I think Roshan definitely raised a good point,
which is that people like to think about and talk about the technology, when in actual fact the business process is often a bigger bottleneck or roadblock. That's kind of what I was saying, so I definitely very much agree. I wish I knew more about why exactly data science projects fail so often. The one thing I know from personal experience is that it's very easy to start going down the wrong path in data science: very easy to think, oh okay, I think I've found the solution and this is the way to go, and be wrong about that. And it's even easier to do that when you have the pressures of a deadline and certain stakeholders wanting certain things. That's an additional constraint that doesn't often apply in other kinds of automation projects: in data science, to get a correct answer you have to walk really slowly and carefully, and there are all these incentives built around making that very hard. But I don't know why, and I would be super curious to hear what other people think about why that statistic is so high.

Yeah, absolutely. Athen, did you have any thoughts on that, perhaps relating back to some of the use-case scenarios Roshan was referencing?

Yeah, sure. Having done a few of these, in my experience it's often an ill-posed problem, and that's exactly back to the point Roshan was making. There's the classic one of the technologist who loves a particular technology: Transformers are the hottest new thing, language models are the hottest new thing; when you've got a hammer, everything's a nail. So that's one of them. Another one is wanting to install AI, to bring in a particular technique, but not being willing to really do the hard yards, as Luca said, and do a lot of due diligence on the data itself,
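As a concrete example of that due diligence on the data: one simple check is to look for rows that appear in both the training and the test split, a classic way a project looks great in evaluation and then fails in the real world. This sketch uses made-up toy data and a hypothetical `leakage_report` helper, purely for illustration:

```python
import numpy as np

def leakage_report(train, test):
    """Count exact-duplicate rows shared between train and test splits."""
    train_keys = {row.tobytes() for row in np.ascontiguousarray(train)}
    shared = sum(row.tobytes() in train_keys for row in np.ascontiguousarray(test))
    return shared, shared / len(test)

rng = np.random.default_rng(42)
# Toy training set: 100 rows of small integer features (values 0-4).
train = rng.integers(0, 5, size=(100, 3))
# A test set that accidentally reuses 10 training rows
# (the other 40 rows use values 5-8, so they cannot collide).
test = np.vstack([train[:10], rng.integers(5, 9, size=(40, 3))])

shared, frac = leakage_report(train, test)
print(f"{shared} of {len(test)} test rows also appear in training ({frac:.0%})")
# prints: 10 of 50 test rows also appear in training (20%)
```

Hashing raw row bytes only catches exact duplicates; near-duplicates and subtler leakage (for example, records from the same patient split across train and test) need domain-specific checks, which is exactly the unglamorous work around the model that the panelists are describing.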
because your system is only ever as good as the data that goes into it, and often people just want to plug straight into a model. This is one of the things that anyone who has successfully built an ML project knows: you spend a huge amount of time doing things around the model, and the model is almost the cherry on top. Oftentimes, with the amount of work and development that's gone into these tools, it's quite plug-and-play; it's all the customization around it: understanding the problem, understanding the business objectives, understanding how you can get the most out of your data, whether you even have enough data. One other thing I'd like to raise is understanding whether your model is actually robust, and whether you can actually know anything about the decisions your model has made, and that's explainability. If you feed something into your model, what is it paying attention to? There are things called confounding factors, and especially in healthcare this is a problem. If you've got gaps in your data, you might actually be making a prediction about something based on the wrong piece of information. Say you are reading radiology scans and you're looking for lung cancer or for COVID: you might find that you've compiled two datasets, one from a hospital where everyone was in intensive care, and another from a hospital where people were in remission, or where there was a much lower prevalence of whatever the lung disease is. But there might be some little artifact on one of those sets of images; there might be a logo. And so if you've got attention, the ability to see what an image model is looking at, you might realize that, oh, it's cheating, it's looking at the logo, and you definitely don't want to deploy a system like that to production, because as soon as you introduce a third clinic you're going to get very unpredictable results.

Yeah, absolutely. Does anybody have any
quick responses to that? Okay, sorry, too slow on the buzzer in case anybody does.

Yeah, okay, sorry, I was muted. Well, based on my experience working on different projects, what I have found is that it's because of two main reasons: number one is high expectations, and number two is poor planning. What happens when we work with AI is that we think the AI researchers will work on it, the model will come out 100% perfect and there will be no flaws, and that's something we miss when we do the planning. Another thing is that there are three different stages to the process: training, testing and deployment. We spend most of our time on training, and when the training results come in we are all very excited: it's showing more than 90% accuracy in the training phase, it should work absolutely fine; we do some evaluation and that's it. But actually the problem with most AI systems is how well they generalize. When we put them on a test system, on unknown data, that's where the real problems start, and that's a deployment issue. That's why industry is adopting a new framework, AIOps, which comes from DevOps: just as DevOps covers how software is developed, AIOps covers how AI operations are developed and deployed, because we should focus more on the deployment side, on how well these systems generalize. Just as in the traditional software development life cycle we spend more than 50% of our time on maintenance and testing, in exactly the same way, when we deploy and test a model we should be spending more time on testing, on thorough evaluation on unseen data, and then I think we can fix this issue.

Okay, yeah, thank you very much, Anwaar. And that actually leads us
really well into the next question that we have, which is a question for everybody. I'm going to go through you one by one really quickly, because I know everybody here could talk about this until the cows come home, the AI-generated cows. I'd like to know how we all envision the job market in, say, 2050, with the integration of advanced AI, especially with regard to some of these discussions we've just kicked off about the augmentation rather than replacement of human capabilities, the ability to integrate critical thinking, and how we could find a successful collaborative balance between those. I'm going to throw to Luca first.

In 50 years, no programmers. I'm going to say it: not a single programmer.

No programmers anymore? Luca, 2050 is not 50 years away.

Okay, it is, yeah, it's 27 years away. No, I'm still standing by it: no programmers. You just sit there and you say, hmm, I'd like this, and then the computer writes it for you. You don't even speak; you put one of those helmets on and you just envisage what you want, and that way you can express it. That's my take: no programmers, we're getting the axe, get out while you can. Don't learn computer science, learn something else.

Interesting. Okay, all right. Well, I think personally, as an employee of IT Masters, that you should probably learn computer science, but I'm interested to hear what Athen has to say on the topic.

Yeah, there probably won't be too many programmers, probably just like Luca said, but there will still be a need for system architects, and there will still be a need for people with ideas, who want to create things that don't exist. If something exists already, then sure, these systems are basically just search engines; they're basically just going to look up and mix together a bunch of ideas that are out there in the world. But I think we still have a unique privilege in that we have agency, and we have things
that drive us in the world to create anew, and every one of us, I think, is quite individual and different in that regard. So I think we all bring something that a system trained only on human data that has already happened can't bring. But in 2050, in 27 years, I think that mission-critical systems will be controlled and decided on by AI. I saw there was something about surgery there, so I'll lean into that: I think surgeries will be performed completely by AI, and probably with much higher success rates. Diagnosis of glaucoma, of cancer, of genetic abnormalities, of gut disorders, of cause-and-effect relationships that we've not been able to pick up yet: all of these things I think we're going to actually shine a light on, and that's what I'm most excited about. That's really interesting. Yes, Anwar, please. Yes, if I look through the lens of history at what happened in engineering: if you look at manufacturing, in manufacturing engineering we got automation, and we now have robotic arms and everything; a lot of automation has been introduced, but have we totally removed the human from the loop? No. I think that's what happens with AI systems as well. If we look at AI, every time there is a new development people are so excited that they start predicting the future: in the next five years everything will be like this, which is not going to happen. For example, the father of AI, Marvin Minsky, when he first started with the AI concept, said that within 30 years we would pass the Turing test and everything would be done. And then at MIT, look at what happened with computer vision: the very first computer vision project started as just a class assignment. They gave that assignment to the students, and that assignment was to build
a robot which could collect some blocks and arrange them; they gave the robot an arm and a camera, and none of the students were able to do it. In fact, it took another 30 years before we were able to do it, because we were so excited at the time that we didn't realize even the basic principles of vision were not yet there. Similarly in AI: we have achieved AI which is really good at single tasks, but we haven't achieved multitasking. AI systems are very good at one task but not at all tasks; that's one limitation. Another thing: we have achieved AI, but we haven't achieved AGI, artificial general intelligence. Yes, we will, but it may take much longer than we are predicting at the moment, so we need to be a bit realistic. Humans are there, and they will always be in the loop. The number one reason, and sorry for the time, but there's a very important point to make: AI systems don't have consciousness, and they don't have self-awareness. These two things are primary, and as long as we don't get these two capabilities into AI systems, claiming that they will replace everything, all the workforce, is not realistic. That's my point of view. Thank you, great. Okay, I like this difference of opinions on the panel. Rashan, how about you? I'm definitely an optimist, and I do feel pretty strongly that the impact AI is going to have, increasingly, across every discipline and domain is going to be really quite significant and profound. I mean that in the same sense that the internet and digitization have had a meaningful and profound impact, and I think some of us can forget how profound that impact is. If we look back 20 years ago from today and think about the impact the internet has had on our lives, many of us would describe
the world we live in today as pure fantasy. So I do think there is an element of: we can't even imagine what it will be like in 20 years' time, because the change will be so profound. Having said that, I don't think we'll have AGI in that time. I definitely don't think that, although Geoffrey Hinton sure gave me a scare when he was talking about the possibility of it happening in 50 years. But why are you so convinced that we won't have AGI? I'll give the less exciting answer, which is that I think we'll continually change the definition, or the threshold, of what we think AGI is. We might reach what our definition of AGI is today, but then we're all going to say, well, that's actually not what AGI is, AGI is actually this, and we'll come up with a new threshold. Also, from what I've seen out of OpenAI, and listening to some of the testers of OpenAI and some of the novel problems they gave it, it was certainly able to come up with creative answers that demonstrated a kind of understanding that surpassed, and blew out of the water, any kind of rote, algorithmic, linear replication of outputs. That's why I say that. But I'll go back to the answer to say one or two things. One of them is that AIs live and breathe high-quality data. Without high-quality data, it doesn't matter how sophisticated your model is, you've got nothing. So in a world where everybody's digging gold mines, you want to be the guy selling the shovels, and in a world where everyone's trying to build AI, you want to be the guy creating high-quality data. I think data governance is going to be a much bigger industry; there's going to be a huge amount of spending going into data governance, particularly for a lot of our largest enterprises that have legacy data
systems going back 20 or 30 years. They don't have a hope of jumping onto the AI bandwagon until they invest meaningfully in data governance to get their data quality up to a standard that meets the requirements of meaningful AI. The other thing I would say, given that AI does live and breathe data and is going to pop up everywhere, is that we're going to see an even faster acceleration of the digitization of everything. The data, the systems, the processes, the things that we interact with all need to be digitized in order for AI to meaningfully interact with or influence them. So from that perspective, and I hate to say this, Mark Zuckerberg's vision of the metaverse, I don't think it's actually too wrong. I don't think it's the next big thing, but in 20 years' time I think we're going to see a lot more of that metaverse kind of thing in the world, partly driven by the digitization of everything in order to be able to integrate it with artificial intelligence. Right, I did not have "Mark Zuckerberg was right" on my bingo sheet for tonight; anybody who did, please check the box for yourself. Thank you very much, everybody, for those answers. We're going to move on to our first question from the Q&A section, and we're going to start with Anwar on this one: with AI-enhanced research capabilities, how can universities secure funding and partnerships in an increasingly competitive environment, and how might this affect traditional research grant models? And why am I muted again? We're muting ourselves when we're not on screen. Thank you. Well, I think that's a great question, because as a researcher working individually on a research problem, that's a challenge I face as well. We have to compete with industry, for example Google and Microsoft, with all the innovations coming in AI, and it's a real challenge for people working in academia. The reason is
because we don't have those resources, in terms of computation and in terms of workforce as well. I'll give an example: submitting research work to conferences. Most of the time when you submit your research work, you are claiming that you have achieved this much accuracy on some data set, and that becomes the benchmark. Now, for the same benchmark, everyone is competing around the world: researchers from Microsoft are competing, researchers from Google are competing, researchers from all the top companies are competing, and at the same time a PhD student working in a lab with no facilities is working on the same thing. When he has to publish his work, he has to compete against the big giants in industry, and that's a big, big challenge. That's why, for example, if we talk about large language models, we are changing our strategy in academia: we don't have the resources to actually feed them, so maybe we focus more on the innovation side and less on competing in terms of metrics. That's one way we can address it. The other thing is that ideas can come from anywhere, not only from the mainstream industry but from students as well, so that's where we can contribute. I completely understand that this is a big challenge, and we all face it in academia at the moment. Right, interesting. Does anybody else have any particular thoughts on the challenges for academics and related challenges? I don't think I have an educated opinion; it isn't a space that I'm terribly familiar with, but some of the themes or ideas that come to mind are creativity, and the value of novel and creative thinking. The ability to synthesize creative thought, I think, is going to become an increasingly valuable commodity and an important part of all competitive environments, actually,
not just academic ones. All right, I mean, how do you get more creativity? That's a great question; I wish I had the answer to that. Yeah, that's quite a large question, possibly larger than this short panel. Luca, I know that you've done some work with universities; did you have any thoughts to finish us off on this question? I definitely can't speak for universities, I've never been on the inside on that one, unfortunately. I do know that there's always an expectation when you do a smaller project that you're going to achieve the kind of results that very large companies can achieve. Someone has seen OpenAI and they're like, oh, I want some of that, here's five thousand dollars. But usually you can still give them something good by using OpenAI's APIs. So I guess what I would say is, there's this story about all the birds competing to be the king of the birds. They have a competition and decide that whoever flies the highest becomes the king. The eagle is flying the highest and looks like he's going to win, and then right at the end there's a little wren sitting on top of the eagle's back, and it just jumps off and flies slightly higher than the eagle, and that way it becomes the king of all the birds. And I guess that's kind of how I feel as a data engineer today doing small projects: I'm the little wren sitting on top of OpenAI, sitting on top of Microsoft, et cetera, and that's enough. Okay, well, we have another question, directed towards Rashan this time, which is: working for a very large company, my company can see the huge benefits of using AI; however, they're also very fearful of where any information inputted into that AI will end up. How can companies use AI whilst keeping sensitive information secure?
Yep. The answer to that is to invest in a great program around cyber security and data governance. These are established domains; you can find experts who can support you to build a program to secure your data. Security architects, people with the right and appropriate skills who can help you meet your goals around data security. The other thing I'll add is that the best security architect, the best data governance and the best cyber security program are nothing in the face of one determined human being, or sometimes one mistakenly informed human being. So another really important thing, I think, is trying to enable a culture of data literacy. One of the things I'm really focused on, a big part of my role, is actually elevating the data literacy of everyone at Australia Post: helping everyone to get a little bit better, to understand how to use data more effectively, to make sure they understand the risks when they use data, to maybe pause and think before going and using one of these tools. You know what they say: culture eats strategy for breakfast. That's another really important thing. So those are some of the different considerations that might help. Right, thank you. Slightly ominous but strangely inspiring, honestly. We've got another question here which is directed at Athen: what are your thoughts on the ethics and practice of equitable access to AI technologies, so that we bring everyone along for the journey instead of increasing the divide between richer and poorer demographics, older and younger, larger corporates and smaller businesses, for example? That's a really interesting one. I would say that I have learned almost everything I know in this field from open resources, in terms of textbooks and things, but mostly from the open source community, and I assume all of our panelists can agree
there: almost everything that we work with is built on the shoulders of giants, and sometimes the giants are the four maintainers of scikit-learn. That's a library that almost every ML engineer is going to use; there are four people keeping this thing going, with very limited funding, and then billions and billions of dollars of corporate investment, resources and revenue come off the back of these tools that are built for everybody. I think this is one of the marvels of the modern world. The way we got, in the last 20 years, to the internet and this incredible technological change, a lot of that was also off the back of open source technology and open source programming, and I think we should see more and more of that as time goes on. That's not only going to look like sharing of code; it's probably going to look like sharing of models that are more general and more extensible, easier to train with limited data, and hosted in a way that you can access them, because they're often very large and require large amounts of compute if you do want to just give something a try. Hugging Face, I think, is a really cool initiative: a company that takes huge models, hosts them for you, and lets you test things. But I don't want to see walled cities, I don't want to see moats, and I don't want to see scare campaigns from people saying, oh no, we need to control the AI because we're the only ones who are benevolent and know how things should work. We've seen some press releases from OpenAI that look a bit like that, but what they're actually scared of is an artificial moat being knocked down, because there is no moat here. The cat's out of the bag, the ideas are out in the world, and I think we should all get as literate as possible in this stuff, both on the data side and on the modeling side, and get curious and see what you can build. Yeah, fantastic. I love, in my very, very biased
opinion, the focus on data literacy from everybody. Does anybody else have any thoughts on that, really quickly? Yeah, I think another thing which is important is diversity, because we have to look at the fairness of the data and of the people who are involved in developing AI. That's very important, because one problem we have seen is data bias: the data is not diverse enough, and the systems we train are data-hungry and reflect what we train them on. Garbage in, garbage out. So data diversity is a very important aspect if you want to build AI systems which really work for everyone. The other aspect is human inclusion: when we include humans in the loop, they should be diverse as well. For example, if the developer community is drawn from certain community members only, say, in terms of gender, with no inclusion of women in the development, they may think in a particular way, because the brains that design these systems are more likely to think about themselves when they design them, and to ignore any community they are not engaged with. So I think a human workforce in development with good diversity is also vital in terms of achieving these goals. Fantastic, thank you very much. As the only woman on the panel, though not the only non-AI professional, very interesting points there as well. I've got another question for Luca, which I think follows on really nicely from that: how are we going to rely on governments to legislate on AI if they don't have the knowledge to understand the impact of the AI? It's going to go badly. I mean, there's definitely a lot of enthusiasm in the higher echelons of society, people at a high level making
high-level decisions, but even the technicians have no idea what's going to happen or how to regulate something like this. I guess this is what I think Athen was alluding to earlier: there's a huge democratization of these capabilities that will happen over the next five years. We're not going to be able to prevent someone with bad intentions from getting their hands on this stuff, or at least we don't have the frameworks there now. So it's a really, really hard question. I don't think we're up to the task at this stage, particularly people who don't really understand the technology. Absolutely. Any other quick comments on that question? Well, what I think is that it's very challenging as well. Take the scenario of driverless cars: sometimes we have ethical dilemmas, for example when the car is driving autonomously and there's a sudden crash. In that scenario, will it take care of the safety of the passengers, or will it take care of the pedestrian, if it has to make one decision? Sometimes we have ethical dilemmas which are very, very complicated, and as long as we are dealing with governments, every government has a different perspective on ethics based on their culture and the practices in their society. Because there's a lot of diversity in our governments, the types of decision-making we do and the types of political systems we have, I think it will be very difficult to agree on a generalized framework which works for the whole world. It will pretty much be based on individual countries and individual areas. For example, if the European Union decides it wants to implement one kind of strategy, that may be different from what
Australia does. So I think it will be pretty much local, and it will be very difficult to implement something on a global scale. Interesting. Rashan, it looked for a moment like you had something burning to say. Oh, look, I was really just enjoying the responses here. I agree that it will be very difficult, but I do feel that, as with all regulation, it's about compromises and trade-offs, and I feel strongly that although we may not find a perfect or the right solution, we should try. I have real concerns, and I do feel very strongly about the need to democratize the benefits that are coming from AI. I really think these benefits are going to be outsized and significant, and I think it would be a real shame if we weren't able to support every Australian, or every person in the world, with the technology. Fantastic. Okay, so we have our final audience Q&A question. This is for everybody, but I am going to throw first to someone I'm about to decide on. The question is: what about military-grade AI, or AI being developed by governments behind the scenes? What are the dangers from these types of AI, and do they represent the real existential threat? I am going to start with Athen on that one. Heavy topic, yeah, just to throw you right in the deep end. Yeah, no, it's scary stuff. The author of my favorite AI textbook actually came out publicly with some videos on this two or three years ago, about AI-enabled drones. I think we've seen Black Mirror episodes that look something like this, and now, actually, in Ukraine, as far as I understand, I don't know if there's any actual autonomous deployment of weapons happening, but drones are playing a big part in that conflict, and it's only a matter of time before we've got machines that are deciding, then and there, whether or not to kill someone. I think that's the biggest legislative push, and the
kind of thing similar to the treaty, or whatever it was, against chemical weaponry in the 80s. I don't know my history that well, but there was a push to say that we shouldn't have machines making any decision about who should live or who should die. Me personally, I don't actually advocate for anybody being killed in almost any circumstance; I think that's atrocious. That's my personal philosophy. But in terms of the world we live in and the reality of conflict, the one other thing I'd say is that we actually already have systems, since the Cold War, that could obliterate us, and we would need to do a lot more building and engineering to get an AI system anywhere near a nuclear warhead, of which we have thousands pointed at one another. So in terms of existential risk, I'd say that, yes, we want to keep an eye on this, but I think we've actually got other problems as well. One last point would be: let's use these systems to actually enlighten one another and realize that it's probably not good to try and kill each other. A controversial take, "it's probably not good to try and kill each other"; you may be making some enemies there. Thank you very much for that answer, quite poetic, even, to be honest. I'm going to throw to Anwar really quickly on this. Thanks. I think it's a very dangerous space, to be honest, and there's definitely a scary picture in front of us, because of what happened with drone technology, for instance. All of warfare changed after the introduction of drones, because we used to have F-16s or F-35s going in, and now everything is more autonomous. Even with drone technology, many people have seen the movie Eye in the Sky, and you can see that scary picture: sometimes the people who are targeted are not actually
criminals, they are innocent people. Similarly with AI: when AI is additionally involved in that autonomous system, and can make decisions of its own, I think that will be even more dangerous. That's one aspect. Another thing is that when we talk about generative AI, there is a possibility that we get a totally new kind of threat, a new kind of weaponry system which we have never thought of. That's also possible in the future, and it's a real threat for humanity: maybe something comes up which is truly uncheckable. And, for example, we are talking about autonomous robots; what if they take control of themselves and are no longer in our control? That will be another scary picture. So I think that's somewhere we need to put more regulation in, so that we can have a safe future ahead. Absolutely. Rashan? Yeah, um