PER Live: Artificial Intelligence: Implications for Business Strategy


Good afternoon, and welcome to the Department of Economics Program for Economic Research virtual live series. I'm Sophia Johnson with the Program for Economic Research, one of the world's leading programs for identifying opportunities and strategies for enhancing economic research. Thanks for being with us today. This event will be live-streamed on the Economics Department's YouTube channel; the conversation will be recorded, and closed captions will be provided in the days following. By attending, all conference participants agree to abide by the event's code of conduct, which is posted on our website, econ.columbia.edu/per. As always, the Program for Economic Research and the Department of Economics at Columbia University take no institutional position on matters of policy.

A little housekeeping before we begin. Each presenter will speak for 10 to 12 minutes; in this case we have a keynote speaker. The presenter has agreed to take all your questions, live or in the chat, at the end of his presentation: you may unmute yourself and ask your question, or simply type it in the chat box. If you're joining us on the live YouTube feed, you can post your questions there or add them on social media, and we will incorporate them into today's discussion. Lauren Close, the program manager here at PER, is joining us as well; she'll be adding updates on social media and monitoring and posting comments in the chat box during our conversation. We invite you to follow us on Facebook, Twitter, and LinkedIn; for this event we're using the hashtag PER Live Series.

So, artificial intelligence: the implications for business strategy. We start this hour defining the many different entry points for AI in the global economy, exploring how far we have come on the path toward achieving this vision of future productivity through AI, and the ways organizations can improve their odds of success. Our keynote speaker is Dr. Chengkai Li, Professor and Associate Chair in the Department of Computer Science and Engineering at the University of Texas at Arlington. (Dr. Talia Plemetez of Wayfair will not be joining us today.) Dr. Li is the director of the Innovative Data Intelligence Research Lab at UT Arlington, where the focus is on building impactful interdisciplinary research in several areas related to big-data intelligence and data science. His research interests include data management, data mining, natural language processing, machine learning, and their implications in computational journalism. Dr. Li pioneered the field of computational fact-checking and has produced a significant body of research on topics related to knowledge graphs. He has also led a 20-organization, multidisciplinary team of researchers and partners from academia, government, and industry to conduct an NSF Convergence Accelerator project on the Credible Open Knowledge Network, which aims to ensure the credibility of decision-making software powered by knowledge graphs. Professor Li, welcome to the PER Live series.

Now, to begin: research on the economics of artificial intelligence almost exclusively focuses on the potentially transformative economic purposes of AI, the thesis that it may significantly lower the cost of prediction. However, only 10 percent of companies buy the story that they can obtain significant financial benefits from artificial intelligence technologies. Why so few?

Hi, Sophia. First of all, thank you for having me here; it's my honor and a great pleasure to have this opportunity to speak to you all. How does this go, should I address your question first?

Sure, you can certainly start with the question of why there are so few, and then move right into your discussion, if that works.

Sure. The term "artificial intelligence," of course, is nowadays a focal point of conversation in our society: everyone is excited about it, everyone is talking about it. The term can refer to very broad subjects and scenes; it can pretty much refer to everything related to computer science. But in a narrower sense, what makes people most excited about artificial intelligence is really the development and advancement of machine learning and deep learning technologies and their applications in the past ten years or so, and really the breakthroughs there.

I think there are several reasons. Artificial intelligence isn't new at all; it actually started as a discipline 50, actually longer than that, 70 years ago, pretty much at the same time computer science started to form as a discipline, and for a long time people really didn't make important breakthroughs. But it started to take off really quickly in recent years, for multiple reasons. One is that nowadays we have data everywhere: our capacity for generating, collecting, and analyzing data is much greater than before. Number two is the advancement of computing infrastructure and hardware, including CPUs, GPUs, larger memory and storage, cloud computing technologies, and so on. And number three, of course, there was algorithmic advancement in computer science, deep learning in particular. So it's natural that every business feels the urgency of adopting AI and being prepared for the impact created by AI advancement.

I do not have firsthand experience with the statistic you mentioned, 10 or 15 percent; I'm pretty sure those of you in the audience know that number in a deeper sense than I do. But my guess is that there could be multiple reasons. Number one, it really requires resources and talent to tap into the capacity of artificial intelligence, and there also needs to be a realistic need: if you are a small, say personal, business and you don't really have a large amount of data from which you can gain insights, then chances are there's less you can leverage by using artificial intelligence, and you may not have the resources to carry out whatever insights or decisions you could make based on it. But I think for corporations and large businesses it's not really a desire, it's really a must, to invest in AI technologies in order to stay cutting-edge and competitive in their businesses. I hope that addresses your question to some extent.

Thank you. I'll turn the floor over to you now to begin your presentation, and we'll take questions at the end.

Cool. So I prepared some slides to summarize some of the projects I've been doing in the past decade or so. I understand this is a very brief presentation, so I'll not go into technical details, but you may hear quite some jargon; bear with me on that. I understand that originally there was another panelist who couldn't participate today, so in theory I may have a little more time, and I'll indulge myself to go a bit beyond the 10 minutes originally assigned to me. I have a few slides, and after that I can share a video with you about a project that I recently worked on.

Excellent, thank you.

Sure, let me share the slides with you, just one moment. Are you able to see the slides?

Yes, perfectly.

Right. A lot of the focus my research group has is on misinformation and fake news, and more specifically on developing computing tools to help fact-checkers, reporters, and the public in tackling misinformation. I don't need to explain what misinformation and fake news are; I'm pretty sure all of you have heard of them, and this has become an increasingly important challenge to our society. Here are just some statistics to remind you about the impact misinformation has on our society and on the economy as well.
If you're not familiar with what people are doing to tackle misinformation, I just want to mention a few fact-checking organizations: the Washington Post, PolitiFact, the New York Times, FactCheck.org, and so on; there are many more. They are working diligently, basically vetting factual claims made by people and organizations and informing the public about the truthfulness of those claims. However, they are not able to keep up with the large amount of information spreading online. For instance, we have learned that a fact-checker will typically need to spend between several hours and a day, or even longer, to really investigate a piece of misinformation, write about it, and publish it. You can compare this with the huge volume of information online, including a large portion of the social media posts on Twitter and elsewhere that are spreading misinformation. So that's a huge challenge, in terms of scale, for fact-checkers to deal with.

Here's a more concrete example to explain why fact-checking is not trivial and takes time. Of course, I'm only using this as an example that's available from PolitiFact; there's no personal political stance here. This is a factual claim made by Mitt Romney a few years ago: he said, "Our navy is smaller than it's been since 1917." It seems he was referring to statistics about the number of battleships in the United States Navy going back to 1916. If you look at the numbers, indeed, the number of battleships in 2009, or in 2012 when the claim was made, was close to a historical low. So in that sense, if you literally interpret this chart and his claim, you would say his claim is largely true. However, his claim was rated by the fact-checking website PolitiFact as "Pants on Fire," and the reason is that the claim was essentially comparing the battleships of 100 years ago with modern aircraft carriers; you cannot just look at the number of battleships when measuring the strength of the Navy. This example demonstrates the subtleness and complexity of fact-checking: oftentimes it's not just about looking up data from somewhere and seeing whether it checks out; you actually need a deeper understanding of the context.

We have been working on a project called ClaimBuster for the last seven or eight years, and our goal is to automate the process of fact-checking. We say this is "toward the holy grail" of automated fact-checking, which is to say that the problem itself is to some extent overwhelming and daunting: we are not really close to a truly automated fact-checking system that can tell you immediately whether something is true or false. We can do that to a very limited extent on certain types of claims, but largely this is an example where artificial intelligence systems and human workers need to collaborate in order to achieve certain goals. Nevertheless, we have made progress in some of the directions where automation is more possible and more effective. In particular, we looked at how we may help fact-checkers decide what to fact-check; we call this the claim-spotting problem. This is a screenshot of a file that PolitiFact called the "buffet of factual claims": basically, they had interns collecting factual claims made by people appearing on various TV programs, news outlets, and so on; they highlight those factual claims and then decide which ones to further investigate. Our tool aims to develop a machine learning algorithm that can rank factual claims, so that fact-checkers can look at the ranked list and focus on the top-ranked claims, because the top-ranked ones are more likely to deserve their attention. They have limited bandwidth, so it's really important for them to be selective about what to fact-check.

This might be the most technical slide in my presentation, so I hope that's acceptable. This is a fairly typical supervised learning task. Basically, we looked at transcripts from all the past general-election presidential debates, and we had human experts annotate each sentence in terms of whether it is a factual claim that is worth checking or not. By doing that, we annotated 23,000 sentences; that's the collection of ground truth that we have. We then learned various machine learning models from this ground truth, and that gives us our ClaimBuster claim-spotting model. Now you can apply this model to a live event, or to a web page or news article, and the model will rank the sentences in that piece of text and recommend to the fact-checkers the factual claims that are worth checking.

This slide gives examples of what factual claims are, and which factual claims are important and worth checking. For instance, "I was in Iowa yesterday" is a factual claim, but most likely it is not important or worth checking; likewise "I ate a burger yesterday," and so on.
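The claim-spotting pipeline described above — score every sentence for check-worthiness, then rank so fact-checkers can focus on the top of the list — can be sketched with a toy heuristic. The features and weights below are invented purely for illustration; the actual ClaimBuster model is a supervised classifier trained on the 23,000 human-annotated sentences.

```python
# Toy sketch of claim spotting: score sentences for check-worthiness
# and rank them. Features and weights are invented for illustration;
# the real ClaimBuster model is learned from annotated ground truth.
import re

def check_worthiness(sentence: str) -> float:
    """Return a score in [0, 1]; higher = more worth fact-checking."""
    score = 0.0
    if re.search(r"\d", sentence):
        score += 0.5  # numbers often signal checkable factual claims
    if re.search(r"\b(percent|million|billion|rate|unemployment)\b",
                 sentence, re.IGNORECASE):
        score += 0.3  # statistical vocabulary
    if re.search(r"\b(will|should|must)\b", sentence, re.IGNORECASE):
        score -= 0.3  # pledges and opinions, not factual claims
    return max(0.0, min(1.0, score))

def rank_sentences(sentences):
    """Rank sentences so fact-checkers can focus on the top of the list."""
    return sorted(sentences, key=check_worthiness, reverse=True)

sentences = [
    "I was in Iowa yesterday.",                        # factual, unimportant
    "I will be tough on crime.",                       # pledge, not checkable
    "Our navy is smaller than it's been since 1917.",  # checkable claim
]
ranked = rank_sentences(sentences)
```

A real model would learn these signals from the annotated debate transcripts rather than hard-coding them, but the input/output shape — sentences in, a check-worthiness ranking out — is the same.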
And there are statements that are not factual claims; they could be opinions, questions, and so on. Something like "I will be tough on crime" is really a pledge, not a factual claim; "Seven percent unemployment is too high" is an opinion, not a factual claim. On the other hand, the top three sentences in this slide are considered important factual claims by our human annotators. The machine learning algorithms look at these annotated sentences, figure out the signals in them, and build a model, so that when the model is applied to future sentences it can predict whether a sentence should be fact-checked or not.

We have built a public API, basically a code base that programmers can tap into: they build fact-checking programs that call our API with a statement, and our API tells their program whether that statement is worth checking or not. They can use this for various purposes. Our code base is also publicly available, and we welcome people to work with us in developing and contributing to it; if you go to our project website, you will find more details.

This is just to show some pictures I took, I think, six years ago, when we worked on using this tool to fact-check presidential debates; this was in 2016. We had a device connected to our TV; we take closed captions from the device, run the ClaimBuster models on the closed captions, and ClaimBuster figures out which sentences are worth checking, so you can imagine fact-checkers looking at this and deciding what to fact-check at that moment. We have been applying ClaimBuster to all past presidential debates; you can find the details on our project website.
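A minimal sketch of how a program might call the public claim-spotting API mentioned above. The endpoint path, header name, and response shape here are assumptions for illustration, not the documented interface; consult the ClaimBuster project website for the actual API and to request an access key.

```python
# Hypothetical sketch of building a request to a claim-spotting API
# like the one described in the talk. The endpoint path and the
# "x-api-key" header name are assumptions; check the ClaimBuster
# project website for the real interface.
import urllib.parse

API_BASE = "https://idir.uta.edu/claimbuster/api/v2/score/text/"  # assumed

def build_score_request(statement: str, api_key: str):
    """Construct the URL and headers for scoring one statement."""
    url = API_BASE + urllib.parse.quote(statement)
    headers = {"x-api-key": api_key}  # assumed header name
    return url, headers

url, headers = build_score_request(
    "Our navy is smaller than it's been since 1917.", "YOUR-KEY")
# The request itself would then be sent with urllib.request or the
# `requests` library; the (assumed) JSON response would carry a
# check-worthiness score per sentence.
```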
ClaimBuster has been used by fact-checkers. There's a news article from the Washington Post about how they picked up a lead recommended by ClaimBuster; the fact-checker said it "would have been lost to history if it had not been for ClaimBuster." And this is a screenshot of a newsletter, an alert email, produced by the Duke Reporters' Lab: they apply ClaimBuster to transcripts of TV programs, social media, and even congressional records, find the highly important factual claims, compile them, and send them to fact-checkers. To some extent, you can see this is a match for the PolitiFact "buffet" I showed earlier, which shows that ClaimBuster can, to some extent, do the work the intern was doing for PolitiFact.

We are also using ClaimBuster to monitor factual claims people make on social media, particularly on Twitter. If you go to this project website, you will find the factual claims made by all the major politicians — House representatives, senators, presidents, and so on — and you can look at the scores ClaimBuster gave their claims in terms of whether they are worth checking. There are some other tools as well: for instance, there's a browser plugin that can flag health misinformation related to COVID-19, and we also built a dashboard for tracking COVID-related misinformation. If you click on one place, it shows tweets from the government officials there, but it also shows what kinds of factual statements and misinformation have been spread by people from that area.

Well, I have some more slides that I can use in responding to some of your questions, so I'll stop the slides at this moment. Do we still have time for showing that video, Sophia?

Sure, we certainly have time for the video, and then maybe we could talk a little bit about the reach of the work you're doing in the economic space, hypothetically, and then we could open it up for questions.

Sure, so let me share that video with you.

[Video] The Credible Open Knowledge Network (CoKN) is a project in the National Science Foundation's Convergence Accelerator program. Good data enables good decisions. The open knowledge networks created by the Convergence Accelerator projects are a public knowledge infrastructure, but will the networks be vulnerable to inaccurate information? This is a real concern, as evidenced by many poor and disastrous decisions caused by bad data and misinformation. CoKN is a suite of frameworks and tools; it helps software developers and domain experts build credible decision-making software powered by open knowledge networks. Let's consider an important use case: vaccine misinformation. Exposure to misinformation made many parents delay or refuse vaccines for their children. This led to completely preventable disease outbreaks, such as a measles resurgence; one in ten infants were unvaccinated in 2016.

Vaccine hesitancy is a top-ten threat to global health today. Parents are given brochures called Vaccine Information Statements from the CDC. These brochures are accurate, but in practice they help very little: they often do not address the parent's specific concerns. A parent may be worried about autism, but the brochure doesn't even mention it; its language and style may be unappealing or confusing; and for folks who don't trust the government, a statement from the CDC often won't help. So what should we do? Our insight is that the brochure must be contextualized for it to have a better chance of being perceived as relevant or credible. We are building an app to help healthcare workers. Imagine you are a nurse speaking with a hesitant parent: quickly, you can enter the parent's concerns, and you'll get back the key points you can make. Based on the parent's electronic medical records, the app will recommend contextualized interventions. For example, if a parent is worried about autism and doesn't trust mainstream media, the app could recommend a video featuring a pro-vaccine mother of a child with autism; but if a parent has read specific research connecting vaccines and autism, the app would suggest articles exposing bad research and retracted publications.

For a second application, consider cybersecurity and how software and hardware vulnerabilities are currently addressed. The standard practice is to examine a ranked list of vulnerabilities reported by various sources; given limited resources, the analysts within an organization would fix top-ranked vulnerabilities first. But the risk scores ignore the organization's particular context. For instance, a vulnerability may pose high risk because attackers can exploit it over the internet, but in a power grid, a device with this vulnerability may not even be connected online. Without this context, poor choices can lead to severe financial and even human loss, such as an unnecessary shutdown of a power grid. To solve the problem, we are building a tool to identify and explain credible threats based on each system's deployment profile.

The CoKN framework and tools draw insights from these applications; they will be important across domains. Software developers and domain experts will use CoKN to improve objective credibility, which is about factual accuracy: the tools will help fix bad data and explain query and analytics results. But credibility is more than data accuracy. CoKN also helps developers improve the subjective credibility of their software, because user perceptions are critical to making information convincing: the framework includes strategies to contextualize results from queries and analytics to match decision-makers' needs and profiles. CoKN captures not only facts but also inaccuracies, and it models user profiles; it uses taxonomies to semantically connect the knowledge graphs and user profiles to achieve contextualization. We've already developed some taxonomies and knowledge graphs for vaccine misinformation and security vulnerabilities. Our interdisciplinary team includes computer scientists, social scientists, and application-domain experts from academia, industry, and government; to create real-world impact, we'll work with our partners and other teams to apply, test, and deploy our technologies. Credibility is key to sustaining open knowledge networks and thus empowering the American people; without CoKN, we risk going back to square one, with low-quality data and poor insights driving decisions.

This is great. Professor Li, thank you so much for your presentation and your contribution, your insights into this remarkable space. You know, ClaimBuster — I feel like somewhere in the future this might be something we can all download on our computers, for use at home or at school, so it really is a testament to where we are going with the global economy. I wanted to speak to the questions, because questions have started coming in on social media. It seems we have a lot of economists in the space. What part of a business would benefit most from applying some of the work you're doing? Are we talking about the supply chain, logistics, the financial sector? And how should students think about collaboration in this space; what should that relationship look like?

That's a very good question; I guess there are multiple questions, so let me try to address some of them. In terms of the research I've been working on, and what the CoKN project as well as ClaimBuster are about, you can pretty much say that misinformation now is everywhere, in every business sector. I don't know if you were able to see clearly the several examples in one frame of that video; just one moment, I'm finding that slide so I can refer to it. There's misinformation related to vaccination; misinformation in the form of misleading maps; misinformation and rumors about particular products; misinformation in data about the safety of highway bridges; and so on. I also just stumbled upon an article — I don't know if you have heard of this — about Dasani, the bottled water from Coca-Cola. They donated a large number of bottles to Waco, Texas; you have all probably heard of the ordeal we went through last week. This was a good act, with good intentions, but surprisingly it was not received well, and it even became a public-relations crisis. Part of it, I think, has something to do with misinformation: people claiming a strange flavor of the water, and so on. Really, this impacts every business sector
in every corner of our lives. Another example I would like to refer to is GameStop: you may have heard of the strange stock-price increase, by maybe ten times, just this year, because investors self-organized through social media to act against Wall Street. I heard that there was misinformation spreading in the Reddit channel — what do they call themselves, WallStreetBets? — the Reddit forum where those investors organized. I don't know if this directly corresponds to the question you asked at the beginning; if you could remind me, I would be happy to go further.

Well, I think what the students want to hear is: one, what can businesses do, and two, as I'm thinking about my post-academia experience, what can I do to ready myself, and what area of business might I consider? What are the openings for the next generation of scholars considering work in the private sector; how can they help to find the types of solutions you're working on? And what are businesses doing about these challenges; what has the response of businesses been — are they reaching out to collaborate with you? What are some of the hypothetical solutions to the problems we're seeing, and to the impact they're having on markets, for example, for businesses?

Right. With regard to my own project, we have been collaborating with the Duke Reporters' Lab, and through them we got connected to the fact-checking community. The impact there so far is still largely on journalistic organizations, in the sense that we have tools that can help fact-checkers do their work, and mostly their work focuses on political fact-checking: vetting whether something said by a politician is true or false. So it's less directly related to private businesses, but it is indeed highly related to some of the well-known examples of misinformation that impact our lives: one is vaccination-related misinformation, and you can extend that to COVID-related misinformation as well. For these, we're working with the fact-checkers at this moment, not really with the private sector. But the point is that misinformation is a very broad concept; it goes beyond political fact-checking. There are all types of misinformation out there, and in that video we provided an example where a cybersecurity business will need technologies for tackling misinformation.

Very good. I would like to open up the panel to questions from our viewers watching on YouTube, and also participants on Zoom: you may type your question in the chat box, or you can unmute yourself, turn on your video, and we'll take your questions one by one for Professor Li. A question here.

Good evening. My question would be: how has the system been back-tested or vetted, so that we know that the questions it flags to fact-check are indeed the most relevant ones?

Professor Li?

Yeah, I was actually hoping to include in this presentation a chart that shows the correlation between ClaimBuster scores and the factual claims actually vetted by professional fact-checkers. Basically, we looked at the things fact-checkers decided to work on and the scores those statements received, and we found a strong correlation. I'll show that to you; it's in a research paper we recently submitted, so just bear with me for a moment. The way we make sure the tool is producing sensible and effective results is, first, by collecting feedback from our partners at the Duke Reporters' Lab and, through them, the fact-checker community. The other approach is, as I said, the investigation we conducted to see the correlation between our tool and what fact-checkers decided to work on. And of course, we are seeing more and more people start using our API: they sign up to get an access token in order to use it, so we are getting more and more people requesting that token, and some of them have even contributed to our code base.

Now I have the paper; let me show that chart to you and enlarge it. If you look at this chart, the distribution on the left-hand side, the blue one, is the distribution of scores that ClaimBuster gave to sentences that were not chosen by fact-checkers to fact-check. Say we are looking at all the sentences from a speech by a politician — a State of the Union speech by the president. ClaimBuster gave each sentence in that speech a score; some of those sentences were fact-checked by a fact-checker, and some were not. The blue curve is the distribution of ClaimBuster scores on the sentences not fact-checked, and the other three distributions, in orange, green, and brown, are the distributions of ClaimBuster scores on sentences fact-checked by PolitiFact, the Washington Post, and the New York Times. You can clearly see the separation between these distributions, which shows that ClaimBuster is able to separate sentences that are worth checking from those that are not. Keep in mind that not everything in the blue distribution is not worth checking: it could be that some of those sentences should have been fact-checked, but the fact-checkers do not have the resources to fact-check everything.

Thanks for your response, Professor Li, and thank you for your question, Eric.
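The evaluation Professor Li describes — comparing the distribution of ClaimBuster scores on sentences fact-checkers chose to check against those they did not — can be sketched as follows. The scores below are made-up numbers purely for illustration, not data from the paper.

```python
# Sketch of the evaluation described above: compare score
# distributions for sentences that fact-checkers did and did not
# choose to check. The scores are made-up numbers for illustration.
from statistics import mean

not_checked = [0.12, 0.25, 0.31, 0.18, 0.40]  # scores on unchecked sentences
checked     = [0.72, 0.85, 0.66, 0.91]        # scores on checked sentences

# A clear gap between the two groups (in the paper, between the full
# distributions) indicates the model ranks check-worthy claims higher.
separation = mean(checked) - mean(not_checked)
```

The paper compares whole distributions rather than just means, but the idea is the same: the scores of fact-checked sentences should sit visibly to the right of the scores of unchecked ones.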
Liz Johnson wants to know: how does your fact-checker differ from Twitter's fact-checker that was introduced during the 2020 presidential campaign?

So Twitter, of course, works with the International Fact-Checking Network and other organizations, and they largely rely on human monitors to decide whether an account is spreading harmful and untruthful information. They may also have proprietary tools or algorithms whose details are not made public, so they could have tools similar to ours to flag, say, tweets that contain factual claims, or even factual claims that could be controversial. That's an area where an organization such as Twitter could potentially apply a tool like ClaimBuster. In fact, if you remember, I showed a screenshot of our own internal project called Claim Portal, whose goal is to use ClaimBuster to monitor and highlight factual claims made by politicians.

Thank you. Also: any plans to go further and directly match the statements to machine-learning-driven systems which process them and present a conclusion as to their veracity?

That's a very good question. We have actually made some preliminary efforts in this direction. Overall, this is a much harder problem: vetting the truthfulness of a claim is much harder than spotting an important factual claim. There are several directions one can take. You can think about the data that could be available from various resources, including the knowledge graphs I mentioned: information from IMDb, from Wikipedia, from data.gov, census and other economics-related statistics. You also have non-numeric data, such as statements or comments made by people. When you are looking at a factual claim, you try to understand that claim, figure out how to decompose it, and then leverage these various data sources to determine whether it's true or false. Of course, this is much harder. We had a preliminary tool that can handle basic factual claims: if someone says the capital city of Texas is Dallas, it's relatively easy to figure out that's not true. But if it's something like "the U.S. Navy is the weakest since 1912," then, as I explained at the beginning, even if the data checks out, you may reach a conclusion different from what human fact-checkers would say; they may say "it actually depends." That requires a subtle understanding, and it's very complex.

Thank you. We have three questions coming in on social media, and I'm also keeping an eye on the time. First question for you, Professor Lee: do you think businesses will continue to adopt AI at a rapid pace? What barriers might there be to adopting AI?

Yes, that's a very good question. There's no doubt that businesses will continue to adopt AI. Big corporations, as I mentioned, have large amounts of data and large numbers of partners and customers, and they can leverage AI to mine that data for many different purposes. I'm sure you have encountered the stories: Amazon uses recommendation systems to figure out what customers may want to buy next, and Netflix recommends videos to you. Even the pandemic creates opportunities; you may have heard of AT&T and Google working on tracking people's movement to slow the spread of the virus. So to me there's no doubt that there will continue to be a large amount of investment and development in this area.
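The decompose-and-look-up approach described above can be sketched in miniature. Everything below is a hypothetical illustration, not ClaimBuster's actual pipeline: the single claim template, the tiny knowledge base, and the function name are all invented for exposition, and a real system would draw on far richer sources such as Wikipedia or data.gov.

```python
# Toy verification step: decompose a simple claim into (subject, relation,
# object) and look it up in a small knowledge base. Purely illustrative.
import re

# Hypothetical knowledge base: (subject, relation) -> object
KNOWLEDGE_BASE = {
    ("texas", "capital"): "austin",
    ("france", "capital"): "paris",
}

def verify_capital_claim(claim: str):
    """Return True/False if the claim matches the KB, or None if unknown."""
    # Handle only one claim template: "the capital (city) of X is Y"
    m = re.search(r"capital(?: city)? of (\w+) is (\w+)", claim.lower())
    if not m:
        return None  # claim doesn't fit the one template we understand
    subject, claimed = m.group(1), m.group(2)
    truth = KNOWLEDGE_BASE.get((subject, "capital"))
    if truth is None:
        return None  # no data available for this subject
    return truth == claimed

print(verify_capital_claim("The capital city of Texas is Dallas"))  # False
print(verify_capital_claim("The capital of Texas is Austin"))       # True
```

Note how claims outside the template ("the U.S. Navy is the weakest since 1912") fall through to `None`: exactly the hard cases where, as discussed above, even correct data can support conflicting conclusions.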
And often it's not a choice: even if you don't do anything, you need to be prepared for the challenges this brings. Recall the examples I gave related to GameStop and Sunny. For instance, a company may need the capacity to monitor social media to understand public opinion about its products and its brand, whether positive or negative, and whether misinformation or rumors about the brand are spreading online. If they can spot that early, they can take action early to counteract it.

Sure. We certainly talked a little bit about that. The onus is on companies to stay ahead of social media posts and to be proactive in how they engage with what's happening in the social media space, positive and negative. That's a good point. What are some of the current trends in AI that you are really excited about? That's a question coming in from social media.

Well, all these questions are great, and this reminds me that the last question also had a second component, about obstacles or limitations, so let me answer both together, starting with the pitfalls and challenges. I believe many of you have heard of pitfalls related to bias. One example comes from my own research projects: if you think about what to recommend to the fact-checkers, there could be bias if you are not careful in building the data model and the machine learning model. The bias can come from various sources, including the bias of the human annotators when they annotate a sentence to decide whether it's important to fact-check. It's also related to the political discourse people have online: what appears more often and what appears less frequently. We have some preliminary experiments showing that a claim-spotting model, if not carefully designed, can be vulnerable to bias as well. One example: we took two factual claims which are otherwise identical, differing only in their reference to two different ethnic groups. One may say the Black community, the other the Hispanic community, and the rest of the two sentences is identical. The model could potentially give one sentence a high score and the other a low score, suggesting very different check-worthiness. If you rely entirely on such a tool to decide what to fact-check, you may end up focusing on claims related to one community while ignoring the other.

Good point. I think we have time for one final question. What are some things people transitioning from jobs in academia to private industry should know? In particular, what should PhDs, MAs, and postdocs in AI and data science know? Thinking ahead to the future global economy, what are your suggestions and predictions? What are you encouraging students to do?

These are all fascinating questions, and very big ones. I'm not an authority, so let me just offer my opinion. Data science is by nature an interdisciplinary discipline, so everyone, regardless of background, needs some level of what is called data acumen: the capability of wrestling with data, analyzing it, gaining insights from it, and making decisions based on it; being able to recognize the potential pitfalls of those decisions, such as bias; and being able to interpret the results of machine learning and AI algorithms.
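One way to recognize the kind of bias Professor Lee described earlier is a counterfactual probe: swap only the group term in a claim and compare the model's scores. The scorer below is an invented stand-in for a real claim-spotting model, and the template, word weights, and tolerance are all hypothetical, chosen only so the sketch runs.

```python
# Minimal counterfactual bias probe: score two claims identical except for
# the group they mention, and flag large gaps. The word weights here are
# invented; a trained claim-spotting model would replace `score_claim`.
WEIGHTS = {"unemployment": 0.4, "doubled": 0.3, "community": 0.1}

def score_claim(sentence: str) -> float:
    """Toy check-worthiness score in [0, 1]; a real model would go here."""
    words = sentence.lower().replace(".", "").split()
    return min(1.0, sum(WEIGHTS.get(w, 0.0) for w in words))

def counterfactual_gap(template: str, groups: list[str]) -> float:
    """Largest score difference when only the group term is swapped."""
    scores = [score_claim(template.format(group=g)) for g in groups]
    return max(scores) - min(scores)

template = "Unemployment in the {group} community has doubled."
gap = counterfactual_gap(template, ["Black", "Hispanic"])
print(f"score gap: {gap:.2f}")  # prints "score gap: 0.00"
if gap > 0.1:  # hypothetical tolerance
    print("possible bias: scores depend on the group mentioned")
```

By construction this toy scorer ignores the group term, so the gap is zero; running the same probe against a real model is how the disparity described above (a high score for one community's claim and a low score for the other's) would be detected.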
Being able to communicate is also important in data science: you need a way of describing the results as a story, in context. Keep in mind that as a data worker you need to communicate data-science-based decisions to decision makers, to your boss, and that communication matters as well. My sense is that this is really an important skill for everyone to have. If you're not a computer science student, you don't necessarily need to be a machine learning guru who knows all the mathematics behind it, but you need to know the basics and what tools are available. Nowadays there are many open-source tools and tutorials online, made accessible to everyone with a basic college-level education, and you can build very useful tools quickly by following some of these tutorials. So data science and AI are exciting, accessible to everyone, and something no one can dodge.

Thank you very much. On behalf of the Program for Economic Research and the Department of Economics at Columbia University, thank you, Professor Lee, for joining us this week. The conversation will continue online, and this video will be posted in the next day or so. You can follow our keynote speaker, Professor Lee, on Twitter at @10k_lee, and you can follow Columbia Economics on social media at @columbia_econ. Thank you so much, everyone, for watching and attending. We encourage you to visit our website, www.econ.columbia.edu,

for more information about our upcoming public events. Thank you, Professor Lee. Thank you very much. Bye-bye. Take care.

2021-02-27 16:18

