LogicLooM 4.0 | Guest speaker and orientation session - 1


Yes guys, we are live, so let's proceed. Good evening everyone. We are super excited to kick off LogicLooM 4.0, the finale edition. After three fantastic months, we are back with one last power-packed version, and we have got some amazing things lined up for you. To set the tone, we are starting with a session that couldn't be more relevant: the

impact of large language models on edtech. At a time when tools like ChatGPT and DeepSeek are reshaping how we learn, teach and interact with knowledge, it is more important than ever to understand the opportunities and challenges these technologies bring. Who better to guide us through this than someone who has dedicated his career to the intersection of technology and education: Dr. Jayakrishnan, or as many of us know him, JK sir. He is a senior scientist at NPTEL and a passionate researcher working on how technology can improve education, especially in teacher training. An IIT

Bombay alumnus, he has been a true pioneer in using tech to transform learning. With that, it's my great privilege to invite Dr. Jayakrishnan to take over the session. JK sir, a very warm welcome, and over to you, sir.

Yeah, thanks everyone. I know many of your names through Discourse and the different emails that you send us, so it's nice to see a lot of you over here, and to anyone who is live on YouTube, thanks for joining in. Let me

take an anticipatory bail over here. Just like you guys, I am also wading through LLMs at the moment. Sorry Lakshmi, I'm not an expert in LLMs. But something I think is a strength is being able to judge whether an LLM is the ideal tool for a specific context, specifically with respect to educational technology; there, at least, I would like to believe I am better than many others. So that's the credential I would start with. Overall, I do not want this session to be me giving you gyan on LLMs and so on. Since all of you are BS students and have been learning from our courses, here is a quick question to all of you: what LLM tools are you using? Just pour it into the chat. Let me take

a look at what's coming in. There is a strategy called fastest fingers first: whoever puts it in here, I'll give a shout-out to the fastest five, and we will see how the technologies differ. Okay. And if somebody can give me a

co-host permission for this, then I can put it up as polls and we can see some good visualizations. Okay. So the first shout-out goes to Yash Rastogi; yeah sir, it's given, can you check; yeah, it is there, thanks. So Yash Rastogi says Perplexity. Harsh Dangar Singh Mahara, I hope I pronounced that correctly, has suggested not only Perplexity but also ChatGPT, Claude and Gemini. Susmita and Aayush Kumar have mentioned Perplexity and ChatGPT, Rahul Asandra DeepSeek. So I'll put DeepSeek, Claude, Gemini, etc. as standard LLM technology; for me they all come under the same category. Perplexity, Google AI search and the like are another class of product. Those are the kinds of categories I will put in. Ayush, Radhu, Ravina. Okay, five people

already. Kunal mentions NotebookLM. NotebookLM is very different from the standard LLM tools like ChatGPT, and different from Perplexity too, because of the very different functionality it has. So there is NotebookLM; I hope Shivam was talking of Gemini there. HMA has talked about Grok. Grok I will still put under the chat category, though it is a new technology that has come up, and Qwen is almost similar. Rahul Chakrabarti has talked about Copilot, which is quite different from the standard tools. Notion AI, by Ayush: I will just note it, because it is built on top of Notion, so a slightly different use. Subramanian Mohan has talked about MCP, but MCP is a protocol, not an end-user tool. Shukit

Babu has talked about Cursor; Cloud AI I haven't seen. So I'm just trying to get an understanding of the different types of things you are using, and I'm also learning something new here: I had not heard of Motif AI, it is for interface design, okay, so I'll put it as an LLM with a different use case altogether. I'm surprised that nobody is talking about the Ghibli trend or any of the image generators per se; I think all of you are using ChatGPT and the others. So, from whatever I'm seeing over here, if you are using these particular technologies, the first classification you can see is that you are directly interacting with the LLMs through the chat interfaces they provide: the functionality is chat-powered, so to speak. That is where all the major players are, and that's where the majority of you are. If you look at the chats,

at least 70% of those who have responded here have talked about chat-powered tools. You can keep putting in new things, because this is knowledge for me as well. The second category is very customized tools like NotebookLM. The difference between a standard chat-powered LLM and NotebookLM is that NotebookLM is customized: the knowledge source is limited, but the same techniques apply, such as summarization, deep dives, different formatting, even a podcast based on the content, a reinterpretation. So the first kind you will use primarily for doubt clarification and content extension on the larger knowledge source that is available, whereas with NotebookLM you are working on a fixed resource; that difference is there. And then you have very specialized things like Copilot, which exist for a specific purpose: GitHub Copilot is primarily a coding support tool. Somebody also mentioned Cursor, which is again primarily for coding. It is a very specific thing, but it does the same kind of job: you give the prompts, you give the comments, and it generates the code, and a lot more power has come into all these tools. So that is the broad basis of LLMs. I've stopped the fastest-fingers-first thing now; let us come back to the main topic. If you look at the broad usage of LLMs as of now, there is a spectrum: from very customized, very specific-source tools to open, chat-based tools with additional functionalities, and a lot more is going to get generated in the coming years. Right? So

one of the major things when you talk of LLMs and educational technology, the impact of large language models, is that you don't need an expert any more. For example, in my college days, if I wanted to understand something, at least the internet was there, but it was basic internet: you put a search query into Google, you go out, have a coffee, come back, and that's when the results would arrive. Those were my college days, and no, I'm not too old either, just

FYI. But the speed, that information explosion: we were the post-2000 collegians, so to speak. The major problem was that the information was there, but we had to do the heavy lifting of compressing it, synthesizing it, and all of those things. If you go another twenty years back, when my profs were doing their college, they had to literally go to another institute's library for specific things: suppose they wanted to know something about compilers, a specific C compiler or a Fortran compiler, they had to go all the way to TIFR, get into the library during the regular hours, and take whatever book was available. So if you look at the last forty years or so, the availability of information, and also the nature of the information that reaches our fingertips, has changed. With LLMs, what has happened is that even the quality of information has changed: there is a massaging of the information, the synthesis is being done by the LLM and handed to you. This is

the biggest change you can see because of the explosion of LLMs in the edtech arena, and it opens up a lot of different use cases; a lot of innovative ideas are emerging. I hope some of you saw the Tools in Data Science course last term: heavy usage of LLMs, even for evaluations. So the possibilities that are emerging are pretty high. Now, I am a tech optimist who believes that every technology can be used for benefit, that every technology has the power to make a difference. If you look at LLMs as a technology, the greatest power they have is the ability to crunch information and provide you with very specific inputs, and this holds whether you are looking at large bodies of text or at code: you provide a language prompt, it understands what you want, and it gives you an almost logically correct code segment. That's the power. Sharan is asking a valid question, so to restate: the session is primarily about the impact of LLMs in the edtech arena and how they have changed educational technology as we speak. I would broadly put three major aspects here. The first major

aspect where LLMs have influenced edtech is latent ability: what is the actual ability of the student? That has been put under a big question mark. So here is a question for all of you. You can use any LLM,

and this is going to be an activity. Okay. First, just give a thumbs up if you have access right now to an LLM, whichever one you mentioned over here. Okay. Let's play a fastest-fingers-first game again. So what I

want you to do is find out, from the web or an LLM or whatever tool you use, three major teaching areas where LLMs are totally transforming education. I'll keep the definition of "transforming" in quotes; that part is up to you. This is the challenge. Fastest fingers first, you have three minutes. The question is:

identify three educational domains where LLMs are transforming education. The key phrase is "LLMs are transforming education". Okay. Kunal Chaturvedi is the first, fastest finger. No,

okay. Ravina is asking whether the question is for me or for everyone; it is for everyone. Kunal Chaturvedi is talking about personalized learning and tutoring, content creation and curriculum development, and assessment and feedback. Panchu Chauhan has

talked about computer science. Smith Fatarji says personalized learning. You can see coding, computer vision, analytics from Subramanian. Okay, Ravina, I understood; I think you prompted it in the wrong place. Primary and secondary education, workforce and vocational training. Okay. Now, just go through the chat messages: are you able to sense how similar the outputs from your various tools are? The point is, and this is a meta point: most of

you are coming up with almost similar answers. Yes, each of you is doing different things, but one of the major impacts of LLMs on edtech is this: all of you should know Pareto's rule by now, and roughly 80% of the assessment and evaluation responses we get are almost similar. This is one of the biggest impacts LLMs have had on edtech specifically. And now I'm not talking from a technology perspective; I'm talking as an

educator. Look at any assessment. Earlier too, the graded assessments, the programming assignments and so on were open; everybody had access to some kind of solved solution even for your graded assessments. So even though the outcome was similar, the process at least involved: there is a repository somewhere, I will get hold of the repository and copy the answer. Now it is: I have some LLM tool available with me, and I will use that to get the answer. So even though, from an edtech perspective, this is a useful technology intervention,

from the other side, a lot of people have stopped doing that fundamental process of synthesis, and especially in a learning setting. This is one of the biggest impacts LLMs have had. The point is, everyone thinks that the final answer is the whole point of education, and yes, at least that is what gives you marks, there is no kidding there; the final answer actually gets you marks in pretty much any situation. But one of the major aspects of learning, which is putting in the effort and going through the process, has diminished. There are a lot of

research papers that talk about the advantages. There is also another bias that science and technology has: if a result runs counter to the general trend, it will often not surface, because the journals are not going to publish it. In fact, how many of you know the pass percentage for Python in the BS program? What is the rough pass percentage? 45. 40. Nice. 24. Hey, it's not that bad, [Laughter] man. Pass percentage. And specifically, okay, let's be very specific: OPPE pass percentages. How many

of you know the OPPE pass percentage of Python in the BS program? Okay, I'm hoping these answers are not coming from ChatGPT or some other LLM. So, roughly it is around 45 to 50%. Now, these are simple questions. Two years back, and we ourselves do a lot of this research work behind the scenes, we take a look at what is happening and how it works: roughly 45 to 50% of people pass the OPPEs in Python, and what we had initially looked at, and this was GPT-3, not even 3.5, was a study to see whether, for the code submitted by students, GPT could give good feedback back to the student. Okay. So we had looked at

around a thousand-plus student submissions across four to five questions, and we had done some analysis of that. It was a published paper; it came out in one of the premier CS education conferences, ITiCSE. Some of your fellow students and some TAs were also part of that research group. But something stood out in that entire exercise: when we were using the LLM, nearly 30% of the time it was not giving correct feedback. About 90% of the time it was able to say whether the code was correct or not, but 30% of the time the feedback itself was incorrect. Now, this is concerning: if a student goes to GPT-3, puts up the question and their code, the feedback that comes back is going to be incorrect 30% of the time. Of course, all of you can say GPT-3 is ages behind current models. But one of the major problems LLMs had at that point, in this educational context, was trust: how much can we trust an LLM? And this is still a problem. Yes, Perplexity and many others have citations and other things. But what is

the guarantee that they have not referred to an incorrect research paper? What is the guarantee about what they pull in? It only synthesizes, and the synthesis is probabilistic. So another impact of LLMs on edtech has been this trust aspect: how much can you trust the LLM, or how much can you trust its feedback? And you know the surprising thing? We followed this up. Four problems from a thousand students is too much work, so in the next round, by which time GPT-3.5 had come out, we took buggy code and gave it to TAs from Python and other courses, and we also gave it to GPT. All of them gave feedback, we sanitized the feedback, and what came out is another paper; it was a multi-institutional collaboration. We

had Prajish from IAC, who was the main collaborator here and who teaches software engineering, and Rishabh Balse, who was the person driving this at IIT Bombay. So I think there were five TAs plus GPT. Then we took all of this feedback, completely anonymized it, and gave it back to the TAs, so each TA would see four other TAs' feedback and GPT's feedback, GPT-3.5 Turbo.

Okay. Interestingly, GPT overall came second in terms of quality of feedback; that is how the others rated it. But the major point was that although GPT's feedback mostly pointed to the correct errors, in between it was hallucinating: inventing errors which were not there, or pointing to a completely different error. I believe this one was published at COMPUTE, another CS education conference, and I'll try to share those materials after the session. But what you have to look at here is this: these are TAs, and the TAs believed that GPT's feedback was really good, among the top two. So the earlier point of trust, which

we said is a problem, now flips: over here, even with errors, students are believing the LLM feedback, because it is very well written.

It is very polished in how it is given, and there are multiple papers on this. So if you look at the impact of LLMs on edtech, this is the third major part of it: trust has another side, and the believability of LLM feedback was very high. So there are three points. One, the thinking part, which was getting weaker; people were not analyzing the information deeply. The second is trust: how much can you believe the LLM. The third is that, given LLM feedback, the majority of students believed it, because it was structured so nicely that it gave the illusion of being perfect feedback. That is a different aspect: not the accuracy side of trust, but the believability side of trust. So a lot of people

are now simply believing whatever the LLM says. So these three, I would say, are the bigger impacts of LLMs on edtech, if you ask me. Now, my question to all of you, since you have done the fastest-fingers exercise: can you ask the LLM itself? Do a reflection with the LLM: given a piece of feedback coming from an LLM, how do you validate whether that feedback is accurate? Can you ask the LLM that? I don't need your opinion; all of you have access to an LLM, so I want you to put this particular prompt into it: I have asked a question and an LLM has given me an answer or a piece of feedback. You can

frame it in whichever way you want; it could be a question and an answer. How can you trust, or how can you ensure, that the LLM's feedback is accurate? How do you do that? Ask the LLM. You don't have to tell me the answer; I know all of you are thinking, but ask the LLM. Here I want you to specifically ask the

LLM this. Okay. Aman Kawal is the first person; sorry, Aman had asked that question. Afifa's answer: cross-check with trusted sources; ensure logical consistency with

known concepts; test it through application; confirm with a human expert; ask the LLM to explain itself for clarity. Okay. Ayush is talking of relevance: is the feedback directly

addressing the content or question at hand? Accuracy: cross-check the facts or advice given, since LLMs may hallucinate. Clarity: is the feedback clearly written and understandable enough to verify? So those are the points Ayush has given. Krishna has given: cross-check facts, test it (run the code or recalculate), check the logic for consistency, ask experts, watch for hallucinations. Anurag Gautam had started typing something but stopped. Rahul Chakrabarti: look for

citations, use fact-checking tools, peer review and collaboration, monitor for bias and errors. Okay, the fifth person, Susmita: trust answers by cross-verifying them with credible sources, apply critical thinking, and test them in real-world or academic contexts. Now, here is my larger question to you. In many of these you see something common, right? Cross-check facts, ask an expert; these are foundational steps. So there is an argument like this: if you have to do all of that with an LLM's output, then what are you doing with the tool? Why is the tool there? Many of the suggestions the LLM has given back are very tedious to carry out, and this is exactly where the opportunity lies within the edtech space.

One of the major advantages of an LLM is that it can crunch information and synthesize something for you in a particular format, and as you adjust your prompt, the quality of the output will be roughly on par with the quality of the input prompt. Now the question is: how will you make sure that your prompts are good? How will you make sure that the information resources available to the LLM for your question are trustworthy and verifiable? Something from my personal experience, rather than what research says: I've been using LLMs for about five to six months now, trying to learn Go with Claude, and what I see is that the boilerplate part, the boilerplate code it generates, is pretty good; it gives you a better outline. If you want to argue for something, it will give you a template for the argument; that is where an LLM is really good. It will stay at a broad level, it will not be specific, but it will give you precise points which are good starting points. Another powerful thing about LLMs is that they can give you a lot of helpful suggestions and counter-questions, so to speak, if you ask for them: "give me some counter-questions through which I can validate this information." That is another place where there are a lot more possibilities in edtech, if you ask me. And the other space, and I think this is where a lot of people miss out: there is

something called reflective learning, or simply reflection. What is reflection? Rather than running away with ideas, you take a moment to pause and ask deeper questions, deeper questions about the content.

The moment an edtech tool is able to give this particular support to the learner, that is going to be a game-changer, if you ask me. I'll give you some specifics of what people have used LLMs for in the educational space. You have already seen lesson planning; you have already seen content generation; in PDS they are doing it end to end. There are others too: you give it a video and it summarizes the video really nicely.

Flash cards can be made. These are all places where LLMs are currently being used. But what is the purpose of each of these features? How does it help in education? That is the deeper question you have to ask whenever these features come up from edtech technologies. So with NotebookLM, or any technology you are using right now, the question you have to ask yourself is: you asked a question and it gave a response; the next question you

should be thinking about is: while going through the answers, how does it improve my knowledge, my thinking, my understanding of the concept, or my understanding of how I learn? A useful activity you can do: all of you will have a lot of chats with ChatGPT and the others. Take a dump of all those chats. Can you see a pattern? Are you a person who asks a single question, gets the answer, and stops? How many turns of conversation are you having? How deep is the conversation? Have you thought about that? Has anyone thought about how you are interfacing with the LLM, what steps you are doing, whether for fact-checking, for validating information, or for something very different, and how many of you are doing this consciously? This is an opportunity, because, if you ask me, none of the edtech companies have looked into this specific aspect.
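
If you want to try that chat-audit exercise, here is a minimal sketch, assuming a hypothetical export format in which the file is a JSON list of conversations and each conversation is a list of messages with a "role" field; your tool's real export will likely differ, so treat this only as a starting point.

```python
import json
from collections import Counter

# Assumed (hypothetical) structure: a list of conversations, where each
# conversation is a list of {"role": "user" | "assistant", "content": "..."}.
# Adapt the loading step to whatever your chat tool actually exports.
with open("chat_export.json", encoding="utf-8") as f:
    conversations = json.load(f)

def user_turns(conversation):
    """How many times the user wrote something in one conversation."""
    return sum(1 for message in conversation if message.get("role") == "user")

turns = [user_turns(c) for c in conversations]
single_question_chats = sum(1 for t in turns if t <= 1)

print(f"Conversations analysed : {len(turns)}")
print(f"Ask-once-and-stop chats: {single_question_chats}")
print(f"Turns-per-chat profile : {dict(sorted(Counter(turns).items()))}")
```

A high share of ask-once-and-stop chats is one rough signal that the tool is being used for answer-fetching rather than for the deeper back-and-forth being described here.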

It is not about me offloading my thinking to the machine; the drudgery part of the thinking is what I should offload. "Synthesize these ten documents": fine, that is what the machine will do. But what should I synthesize? How should I look into the documents? What kind of result am I looking for? That should be in the back of my mind, and it should be part of my question to the LLM. This is why I like NotebookLM. It is a

very good, limited-scope but fantastic tool for learning and research, whatever you want. The scope is defined, you can ask deeper questions, you have templates such as the study guide and briefing notes, but you can get more involved, ask more questions and convert some of it into an artifact. This is how learning happens. If you actually look at how you learned maybe five or six years back: you would try to solve something, try to answer something, suddenly you would find, say, a Stack Overflow post where it is explained differently, and you would collate a lot of information. Suddenly

you would see some counter-example to the one you were actually working with, and that counter-example would force you to ask: okay, what was fundamentally wrong in my original argument, my original solution? All of these are powerful educational tools: Socratic questioning,

counter-examples, and something called estimation. Estimation is ballpark-figure thinking: how many hairs do you have? You look at a person with a lot of hair and ask, how many hairs do you have? How do you solve such a question? So here is an open question to you, and it comes from the PhD thesis of one of my colleagues, which was about estimation skills and how you cultivate them. Can the human heart be used for opening a Coca-Cola bottle? Can a device be made which takes input from the human heart, transforms that energy, and uses it to open a Coke bottle? Open question to all of you; you can ask GPT. I'm not debating whether the question is valid or whether it is actually possible; I'm just giving you a fictitious scenario and asking how you would approach it. I'm hoping most of you love cold drinks, especially aerated drinks, and I'm taking the example of a Coke glass bottle. All of you know that it is carbonated and pressurized and there is a cap on it, and you need a bottle opener to open it. So my question, again: can the human heart be harnessed to open a Coke bottle?
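
Purely to illustrate the kind of estimation being asked for, here is one rough back-of-envelope sketch; every number in it is an assumption made for the exercise, not a figure from the session, and the point is the chain of reasoning rather than the result.

```python
# Back-of-envelope estimate: if the heart's mechanical output could somehow be
# captured, how long would it take to gather the energy needed to pry off a
# crown cap?  All numbers below are rough, assumed orders of magnitude.

heart_power_watts = 1.5       # mechanical output of a resting heart, roughly 1-2 W
cap_force_newtons = 30.0      # assumed force needed to lever a crown cap off
lever_travel_metres = 0.02    # assumed distance over which that force acts

energy_joules = cap_force_newtons * lever_travel_metres
seconds_of_pumping = energy_joules / heart_power_watts

print(f"Energy to pop the cap    : ~{energy_joules:.1f} J")
print(f"Heartbeats' worth of work: ~{seconds_of_pumping:.2f} s of pumping")

# The estimation exercise is really about the next questions: how would you
# capture that energy, store it, and release it as one short burst of force?
```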

I know a lot of you have already put answers over here. All the people who have posted: Priti is asking a question, Shiva Shan Usmani also has asked a question. Afifa, you have again given it to ChatGPT, good. Afifa, Stephen Telis, Panchu, good. I know you guys are playing fastest fingers first, but here is the meta part of what I want you to go further on. Do you believe

that the answer ChatGPT gave is reasonable? Think about it. How are you asking the questions? Are you asking them correctly? And more importantly, like I said, this particular question was a thought experiment, given to understand the estimation skills of the student. The point was not the final answer; the point was about students learning the ability to estimate. A lot of you are looking at the heart as a motor; most of the answers treat the heart as a motor which pumps blood everywhere. Are there other ways to think about it? Somebody is refuting it, saying it cannot be used at all. Okay. So, I'm not worried about the

final answer; this is what I'm trying to say. I'm not looking at the final answer over here. What I'm looking at is: are you thinking? And this is where, with LLMs in edtech, I think there is a missed opportunity for a lot of solution providers: is the tool forcing you to think, or is the thinking being offloaded? If any LLM tool, any AI-powered edtech tool, offloads the thinking process from you to the machine, to the system, whatever you want to call the AI tool, that is not effective educational technology.

Think of a scenario: how can the thinking still remain with the student, while any mechanical operation, computing operation and so on that is required is handled by the machine? How can the machine help you there? This is what is called human-AI synergy, and these are the buzzwords that are going to come up in the days ahead. For a true edtech solution, the AI has to empower the human, and that empowerment happens when it improves the biggest skill set of a human: the ability to think, to discriminate, and so on. How does it improve that

skill? I think that is the core message you would want to look for in edtech solutions. Even when you are developing an edtech solution, think about this very deeply: how is your solution empowering the user? And by empowering I mean it has to give some practical value addition to the user. It should not just be that it did some complicated math or wrote some complicated code in seconds; how does it help the person in the entire process? That is the point that edtech tools have to work on. And if any of you have interesting ideas around this, tech companies would snap you up. If there are enough people who can build a product which nudges students to think, and I know that as a student you don't want to think about a problem more than you have to, that would be a very powerful edtech tool, and that would be one of the greatest impacts an LLM-powered edtech tool can give back to education. I know this might not

be the kind of talk many of you expected, but a couple of things I want you to really look at: look at all the messages coming in the chat. I'm trying to see whether the same people are adding more refined answers. I know the Coca-Cola question was a little provocative. The point

is: has this session made a difference? If you look at the value of any session, any teaching and learning session, the value you have to take away as a student is whether it has made a substantial difference in the way you think about learning. That is the power a session should have, that is the power an LLM technology should give, and that is the power a good teacher gives. Why are good teachers not replaceable? Teachers may be replaceable, but good teachers aren't. How many of you, and I'll take examples from our own programme, would line up for a session by Karthik on math or MLT or any of the courses? How many of you would line up to listen to Andrew talk about, say, joint [Laughter] probability? How many of you would line up to listen to Sudarshan explain debugging? See, that's what I mean. There is a reason why the human is in the loop. What everybody should realize is that the human is important; it is not about replacing the human with LLMs or AI or anything else. So, the three major messages I gave: trust, believability, and thinking. If

you are able to visualize these three as pillars and compare your edtech solution with teachers or other human subject-matter experts, so to speak, that is the test. And hey, by the way, no political discussions in this chat. I understand it is meant as a joke, but please refrain from any political discussions; we are talking of LLMs and educational technology, let us focus on that. Okay. Any questions?

I know it was very interesting and very intriguing as well. Oh, is it? Yeah, it was something different from how sessions are usually taken at Paradox. Yeah, I think Sushmita ma'am, you can speak; any questions from anyone? Anybody with questions, please feel free. Yeah, go ahead, Brmana.

Um, sir, thank you for the opportunity. There is a course, the CA, the chartered accountancy course; it has many subjects, and for one of the subjects the full book PDFs are on the website. What I wanted to do was take the text from the PDFs and use it to build a chatbot which will help students; it would be a tool for assistance, but I'm not able to think of much functionality for the chatbot. Do you have any suggestions?

Yeah. So think about this: a lot of existing solutions are already doing it.

So what is the extra value that you can provide? Chartered accountancy, and even in your BS programme, think of the immense number of PDFs and websites you have to look into to go deep into a subject. So how can a bot help? The first thing is how easily it can synthesize; that is the power of the LLM. But if you are thinking of giving your product a USP, I think the value is in how it helps the student to think; that is the question you have to think about, and how you can introduce that functionality into the bot. What I would suggest is to look at one particular technique, one of the oldest in the book: Socratic questioning. How does your chatbot support Socratic questioning? There are a lot of apps that already do it, but if you can innovate on this, that is going to be fantastic. That is basically how Socrates taught his pupils: he would ask a question, and when something came back, he would ask again on top of that. So how do you nudge your student to think more critically? That is what the idea should be. For CA and similar programmes, there is certain factual information and certain process-related information. Another thing to look at is Merrill's framework, and Bloom's taxonomy: Bloom's taxonomy talks of cognitive levels, but it also has another dimension about the type of knowledge: fact, concept, process, procedure, and I think theory; I forget the fifth one.

But where does the knowledge lie? Is it factual knowledge? Conceptual knowledge? Process knowledge? Procedural knowledge? A more open conceptual knowledge? You will have to differentiate these aspects, and if you are able to build on top of that, I think that would be another useful input; you can think of the product in those terms as well.
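
As a minimal sketch of the Socratic-questioning nudge suggested above, assuming a hypothetical ask_llm() helper that wraps whichever LLM backend you choose; nothing here is a specific product's API, only an illustration of the loop.

```python
# Sketch of a Socratic tutoring turn: the bot replies with a probing question
# first, and only confirms or corrects after the student commits to an attempt.
# ask_llm() is a placeholder; wire it to your LLM provider of choice.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("Connect this to your preferred LLM API.")

SOCRATIC_INSTRUCTIONS = (
    "You are a tutor inside a chartered-accountancy study chatbot. "
    "Do not give the final answer immediately. First ask one short question "
    "that probes whether the student understands the underlying concept. "
    "Only after the student replies should you confirm, correct, or go deeper."
)

def socratic_turn(student_message: str, history: list[str]) -> str:
    """Produce the tutor's next message and record both sides in the history."""
    transcript = "\n".join(history + [f"Student: {student_message}"])
    prompt = f"{SOCRATIC_INSTRUCTIONS}\n\nConversation so far:\n{transcript}\n\nTutor:"
    reply = ask_llm(prompt)
    history += [f"Student: {student_message}", f"Tutor: {reply}"]
    return reply
```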

Sir, I have a follow-up on that answer. When he said PDF, meaning he wants information from the PDF, the technical side of me immediately thought of retrieval-augmented generation: splitting it into chunks, storing the embeddings, and so on. I was into that process, and then I was thinking about graph-based approaches like LangSmith, going into agentic AI, trying to split it up. So where do I connect your answer into this solution, or is there some other perspective to it?

No, no, it connects at the very first step. The embedding that you do: how are you embedding it? Factual knowledge, for instance, has nothing to dispute in it. So even in your vectors, if you have these components, the question to ask is: is this about a process, or is it about a procedure? Procedure and process are more or less similar; maybe a process is more straightforward, while a procedure involves a lot of decision-making. A fact cannot change, and conceptual knowledge is different again: what is the fundamental understanding? If, in your vectors, you are able to capture something like this, that is where the differentiation helps.
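
To make the "knowledge type inside your vectors" point concrete, here is a rough sketch of tagging chunks before they go into a vector store. The tag names, the classify-with-an-LLM step, and the embed()/ask_llm() helpers are all assumptions for illustration, not a prescribed pipeline.

```python
# Sketch: label each PDF chunk as fact / concept / procedure / process before
# embedding it, so retrieval can later be filtered or weighted by the kind of
# question the student asks.  embed() and ask_llm() are placeholders for your
# chosen embedding model and LLM.

KNOWLEDGE_TYPES = ("fact", "concept", "procedure", "process")

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("Connect this to your preferred LLM API.")

def embed(text: str) -> list[float]:
    raise NotImplementedError("Connect this to your preferred embedding model.")

def classify_chunk(chunk: str) -> str:
    """Ask the LLM to assign one knowledge-type label to a chunk of study text."""
    prompt = (
        "Classify the following study material as exactly one of "
        f"{', '.join(KNOWLEDGE_TYPES)}. Reply with the single word only.\n\n{chunk}"
    )
    label = ask_llm(prompt).strip().lower()
    return label if label in KNOWLEDGE_TYPES else "concept"  # safe fallback

def index_chunks(chunks: list[str]) -> list[dict]:
    """Build vector-store records that carry both the embedding and the tag."""
    return [
        {"text": c, "embedding": embed(c), "knowledge_type": classify_chunk(c)}
        for c in chunks
    ]
```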

Shukit, also look at something called knowledge space theory. It has been used in a lot of intelligent tutors; intelligent tutoring systems have been around since roughly 1975 to 1979, and there is a rich body of literature on the various ways in which people can be taught intelligently, with the AI on the other side of it. So: how are concepts levelled, how is your graph structured, how are you distributing the graph? If you can innovate there, that is where I think you will see a lot more value addition.

Sir, I'm actually a Teach For India fellow, so I immediately connected with what you said about Bloom's taxonomy. But my question is: I'm learning techniques here, but how do I bring the human factor into it?

Yeah. So, and sorry for cutting you short, a lot of that human input cannot work the way an LLM does, where the moment you get powered up you start doing it. Our knowledge

comes from experience, and that experience is what differentiates humans from machines. How are we using that experience? How do you discriminate the information coming from a machine, and what do you give to the student afterwards? You are acting as a filter, or a filter plus amplifier, so to speak. How do you do that, and how do you do it effectively? I think that is where the value of the human lies. Especially

if you are a TFI fellow. Like I said, it is not about the correct answer. If you look at the situation of the students in a school which has a TFI fellow, a lot of it is about how you become more empathetic, how you look for chances, how you create opportunities for students. That is where you have to think a little more, and during this process learning becomes a by-product, something that goes along consistently. So how do you build that kind of model? That is where the human in the loop is more important. And how do you learn all these things? Simple example: for any new incident you encounter in the field, can you ask, how should I have responded here? Have you ever asked that question to an LLM? There are a lot of theories related to empathetic behaviour, right? So what happens if, say, you put three or four books on empathy into a notebook,

and then you ask a very targeted question: I have this situation, what happens when the scenario changes? Is it able to construct a meaningful response? Right. Yeah, I'll complete that question. Whatever you spoke

made sense, but I am still a little skeptical, because I think the AI piece is still in its puberty stage; it is still developing. Right now we are at ANI, artificial narrow intelligence; the next phase will be artificial general intelligence. So I am wondering: what part of my capability as a teacher will it take over, and how will I remain a person who can make an impact on a generation? I'm looking forward to it, but I still have this question as a teacher, so that's why I'm asking you.

So the answer to it is what Armstrong said some fifty or sixty years back: it's a small step for the human. Take small steps. The moment you take the small steps, those individual steps matter; it is not about the magnitude of the impact. When you look at impact, there is something called depth of impact. You could superficially touch thirty students, superficially impacting thirty students, or compare that against deeply impacting one student. How do you weigh those against each other? You're asking me? Yeah. Yeah.

So I think I'll go for the second one. Why? Because it makes sense for both of us, right? With the thirty people I'm just a partner, pushing each of them by some delta-x, so it is thirty times delta-x. My point is, if it is a focused impact, I think I am creating one more person like me, and he will go and spread it; it will proliferate like that. Mildly touching people, I don't personally think that is the role of a teacher. When we talk about children there are circles: there is the first, innermost circle, and maybe in the second or third circle a mild touch could sit, but as a teacher you are very much personal to a child, is what I believe, because the kid respects you like that. That's the first reason.

Yeah, agreed. So the thing is, and this is one of the difficulties when you look at it: the question then will be, will you go for one-on-one mentoring, or will you do the TFI fellowship for the class of thirty that you are handling? Those are the kinds of questions that, as a human, you will have to evaluate, assess for yourself, and then act on. I'm not saying your answer is wrong; I might have had a similar answer maybe fifteen years back, but some experiences recalibrate our own thought process. That is the value the human has: the ability to discriminate, the ability to think things through a little more from a human lens. I don't have a perfect word

for that, but the idea is: any technology you use, and it need not be just LLMs, it could be any other technology, is it helping you to think more critically, to reflect more critically about how you learned? That is what technology should promote, and that is unfortunately what many edtech solutions are not doing. The moment you are not able to do that, the power of the technology is somewhere lost. Very much true. Yeah, I'm not going to take more time here; I'll have this conversation outside, I want others to get a chance too. Okay, I still see people coming up. Any other questions, feel free to post them to the team; I know these guys have also been at it for an hour. No, I can take it, sir; you can take it, we don't have... No, no, I think I have to cut right

now, because it's already... Sure, sure, sir, no issues. But feel free, yeah, feel free to post your questions; all of you have my email address, or you know my handle on Discourse. So if you want to create a post in Discourse and want me to look at it at a later point, of course I don't know whether I will get to it, but I am seeing a lot of posts in Discourse on a regular basis. I may not be interacting, but that does not mean I'm not seeing your posts. I'm seeing many of the posts that are there, and many of the posts

where I'm tagged. So if you feel this is a conversation you need to extend, feel free to put it over there. But I will always have this question: how

are you thinking? How are you thinking about this problem? What is the value you are getting out of the thinking process? I think these are the two major things I want you to take back from this particular session. Yeah. Go ahead, Susita, I think the last question. Yeah, last question. Yeah, absolutely, guys.

Good evening, sir. Thank you for an interesting session. My question is quite a general one. I just want to

know whether AI will be here as our companion, or whether it will overpower us in the coming future. It is kind of a general question; it is not entirely about LLMs, but it is not beyond that either.

No, it completely depends on your take on it, right? I

mean, okay, how do I put it; I'm trying to figure out an analogy. Yeah. So the question is like this: will the entire country be converted into cities, or will there be a mix of rural and urban that continues? What is going to happen? Will every village be converted into a town, into a metro? It is almost like that. A lot of it depends on how we are using it, on how responsible we are, and that will determine the outcome. And that is why, many a time, there are also critiques of technology use alongside; those are the people who keep us in check, who ask us to re-evaluate and rethink this entire thing. So that is something to keep in mind. Now,

will AI be a companion, or will AI overtake? It will completely depend on how we are interpreting AI, how we are using AI. The moment we offload all our thinking to AI, that is a very ominous outlook; we could see WALL-Es happening in maybe 20 or 25 years, and I'll definitely be on the spaceship in that case. Yeah. Hey, are you on LinkedIn? Yeah, I'm

there. Okay, I'll connect there, then. Thanks a lot, sir, for this great session. Since you're busy, I think you can wrap up. Yeah, thank you so much. It was so

informative, and I just felt like going back to my old days.

So, the effectiveness of my session: one person has been deeply impacted. Whether I take it like that, or as 40-odd people touched by delta-x; see, that's a call that you have to take, right? Okay. Yeah. Thank you. Thank you so much. Thank you, sir. Thanks a lot, sir.

Thank you, sir. Yeah. So, can you just share your screen? We'll quickly go through the orientation process. I can drop off now, right? Yes, sir.

Yes, absolutely. Thanks, everyone. Thank you for being this audience and for all the chat answers that have come in. Can somebody share the chat transcript with me at the end? Thank you. Bye-bye.

Yeah. So guys, we will just go through the queries that you have about the event. You can

raise your hands, or I think we can take them in the order they come in. Can we have some opinion on how you want this done? I think first you should explain what is planned this time, and then we can take questions, otherwise it will be a never-ending session. Sure. Just a minute. I think you can go

ahead; I'm just coming in a minute. Jenan, can you share your screen? You're muted, by the way. Yeah, just... excuse me... yeah, can you just share your screen? Are you facing some issues? No. Okay. Go

ahead. Is it now going to be a brief explanation of the event? Yeah, correct, absolutely: a brief explanation about the event, because some of the students are new to it and have some queries. So I think Susita ma'am can start with her feedback while he shares the screen.

Am I audible and visible? If you can, ma'am. Are there some old participants? I guess he actually won the last time. No, today unfortunately some people had to drop off from the event because there was a blackout going on at the time. Yeah, I'm getting messages saying, "I had to leave because of this reason." That's

very unfortunate. Anyway, yes, Jenan, I hope you can go to the page. Yeah. Okay. Absolutely. The LogicLooM page; just go up to the header. Yeah, the screen is visible. I think you can go to the rules page quickly. So, this time we

are expecting people to participate in a total of four rounds. In the first round, what we usually have is a kind of creativity round. Now what is a creativity round? Has anyone here participated in our event before? We have had around 1,500 unique participants to date, since this is the fourth and final edition of the event, which has been running very successfully at Paradox. If you are new,

let me introduce it. In this round, you are not going to be tested on anything that relies on rote memorization or the like; you are going to get questions that force you to think out of the box and solve the problem in a unique way. You can think of it as an out-of-the-box, critical and creative thinking process. Sang, do you want to add anything here? Yeah, absolutely. So the creativity part comes in over here, and it's crucial. Why?

Because let's say someone doesn't know about machine learning or all the technical jargon. Okay, all you need is your basic logic. If you know the basic things, you will be able to solve those questions. Let's say

there is something on decision trees. You don't know the jargon, like how to split nodes and all that, and you don't have to go by those specific terms. What you can do is use your normal logic. Let's say you have a question about picking out the animals that have four legs, then whether they have some other attribute or not, something like that. You can just go by decision-making. Okay.

So you have three options or two options. Four legs? You go down the four-legs branch. Then again, depending on the question, you go down the next branch, and at the end you reach the leaf node and you get the answer. Something like that: that's the creativity part. It is not necessary for you to know all the technical jargon. That's the thing.
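
To show how far plain yes/no logic takes you in the kind of question just described, here is a tiny sketch; the animals and attributes are invented for illustration and are not an actual contest question.

```python
# A decision tree is just a chain of yes/no questions: walk the branches with
# ordinary logic, no ML jargon needed.

def classify(animal: dict) -> str:
    if animal["legs"] == 4:
        if animal["has_fur"]:
            return "likely a mammal"
        return "likely a reptile or amphibian"
    if animal["can_fly"]:
        return "likely a bird or an insect"
    return "something else"

print(classify({"legs": 4, "has_fur": True,  "can_fly": False}))  # likely a mammal
print(classify({"legs": 2, "has_fur": False, "can_fly": True}))   # likely a bird or an insect
```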

Yeah. And adding to him, I would also say that if you are foundation-level students, please go ahead and participate, because we have a separate category for you, so you will be competing against your own peers who are at the same level as you. Another thing is that the questions will have an ML or AI essence in the initial rounds, but we frame them in a way such that you don't need any prerequisite AI/ML knowledge; you should be able to answer if you are able to think. So that is

the only key to doing well in this first round. The motive of every question, the end goal, is that you somehow fall in love with ML. That has been the goal of this event since we started it in 2023 at Paradox: ML for everyone. These initial rounds that foster creativity and critical thinking we can only arrange in this month-long Paradox edition, because we usually cannot accommodate so many rounds in our online editions, where we start directly with the ML challenge. But for this Mayfest edition, again I would say: even if you are not coming to the campus, please register yourself. I have provided the link, and our event team has also provided it, because the initial three rounds are going to be held in hybrid mode, as I said. So you have the option

to participate online, and everything in our event happens via our own dedicated portal, app.logicloom.padups.org. After you register, all of you should be able to log in to our app; we will provide you access shortly. Now, another point is that the first round is a timed quiz. Okay. So we will be giving you certain slots.

Say our first round starts on the 10th...
