07 Will technology have our back with Thomas Vander Wal


Setting the stage

The whole model for how humans got out of caves is built on sharing of knowledge. Moving ourselves forward as a human race relies on us sharing information and being able to point back to truths. Where did this come from? What's the background on this? And being able to lean on things. Generative AI, for the most part, doesn't care about that.

JANE
I'm Jane McConnell and welcome to Imaginize World, where we talk with forward thinkers, pioneering organizations and writers of speculative fiction. We explore emerging trends, technologies, world-changing ideas and, above all, share our journeys, challenges and successes. Today I'm talking with Thomas Vander Wal. I've known Thomas for about 10 years or so. We're both part of an online discussion group in Slack that has existed for around 10 years, and we actually met face-to-face at a conference in Washington, D.C.,

at KMWorld. Thomas has had a wide-ranging career  with a lot of deep dives into different aspects   of technology. What's interesting  is he's always focusing on people,   what tech means, how it helps us, or as  he likes to say, "How it's got our backs." You can discover a lot more from his website and  his Wikipedia page. Today we're going to talk   about sustainability and the role of technology,  which can be positive and can be negative. We're   going to cover some different points starting with  AI, especially generative AI or performative AI   mimicking human behavior rather than pure analysis  of data. We talk about trust. It's powerful. It's  

ambiguous. Thomas has done some very interesting  work with students about what that word means to   them and how it can be a very confusing  word if you don't think about it clearly. We talk about how we can distinguish between  human content and AI content. Is it a legal  

question? An ethical question? Simply a question of honesty? Another topic that's very interesting is the famous 15-minute city, and Thomas's vision of how it fits in with the global world that we live in today: two different, complementary parts that fit together. We talk about education and how AI can enhance learning in the near future. And speaking of the future, I asked Thomas what his vision is for the next 10, 15 years. For him, a major key is technology, which we need in order to create greater sustainability, but it depends so much on how we use it and how it develops. We cover a lot of things, so let's go.

JANE
Well, Thomas, it's great to see you here. You and I have been communicating online for, I think, about 10 years.

THOMAS
Really good to be here. Thanks for the invitation.

JANE
Our Slack group is very active and I have learned over the years to really appreciate your... You're one of the people who has a very deep understanding of technology,

but the thing is you don't approach it as tech, as IT. Whenever you talk about technology, you're always into what it means for people, what it means for the way we work, the way we live. You bridge technology and people. What I'm hoping to do in our conversation today is build a different bridge. I'd like to bridge from where we are today to where we could be 10, 15 years from now, because I have a feeling you have quite some views on the future too. Is that true?

THOMAS
A little bit. Part of it comes from looking at today and looking back. I've been in and around the tech world, tried to escape the tech world by going into grad school in public policy. Came out, had one or two public policy related jobs and then it was fully back into doing essentially tech work, trying to have a deeper understanding of technology and its impact as an enabler for people to do work, communicate, and to essentially have something behind them. Something that has their back as they're doing what they need in their own life.

JANE
I like what you just said, "Something that has their back as they're doing what they need to do." What do you mean by that?

THOMAS
Essentially, technology being able to do a fact check, do a spell check as a very basic support system. Being able to have digital calendars that are quite accurate, unless things fall out

of the calendar. Being able to have something... A calendar that is on your desktop, in your pocket, wherever you're going. Essentially it's a tool and an enabler for what we do in our lives, and I have always looked at technology as being that enabler, as being that assistant for us, whether or not we have other people in those roles in our lives. Technology is a way to collaborate and work together, however simple or small something is, or however grandiose and incredibly complex. Technology is something that can, and in my view should, be able to help us along the way.

JANE
Right. I like your phrase, "Have our back." I think that's really accurate, and it's very visual and very human. Speaking of humanness, I'd like to talk a little bit about sustainability. It's a word that we use all the time, and I'd like to know how you would define it and how technology could have our backs, or not, when it comes to sustainability.

THOMAS
Sustainability, from my perspective, is having an understanding of the limited resources that we have here on earth. Being able to understand that we don't necessarily have a history of using renewable resources. Things that are not of abundance,

things are limited. Switching from non-renewable resources to renewable resources, or things that are in more abundance and not of limited quantity, but that still give us the same quality, or close to it, as what we've been doing with limited resources, which are also damaging our global home. Looking at where technology will be in the future, it needs to be friendly or kind to that sustainability. Think of excessive computational expense: things like Bitcoin were just eating power and energy like crazy, and that was not a good thing for the world around us.

As we look at generative AI, it is doing something very close to that as far as the computational power required to churn out things with hallucinations. What is the value of what we are getting out? How can we have technologies that will have our back? The traditional AI and ML models can be somewhat computationally expensive as well, but they are able to see things that humans cannot see, and to build predictive models and understanding: "Oh, we've got a problem with this." As the climate is changing,

we can see shifts in wind models and other things that we are depending on as renewable resources before they are humanly detectable.

JANE
Thomas, I want to cut you off just for a second, because you talked about LLMs. I think maybe you could explain that for some listeners who might not know what that is.

THOMAS
The large language models.

JANE
Yes.

THOMAS
Essentially having very large models of discrete data, down to very small points, and the generative AI is using the LLM, with those small points as predictive models, for what would come next as an output. Generative AI is essentially a performative AI, where it is trying to sound human, sound like something natural is coming out. Prior AI and ML models,

machine learning models, essentially go in and look at those relationships, essentially as a sensory component, and are able to say, "With this data that we are looking at, here are endpoints. Here's a problem that is happening, or a shift that is happening. What are the things that are highly probable to have caused those?" Being able to look at wind shifts. Being able to look at things like beach erosion. All sorts of other things. What will things look like? Being able to do predictive modeling based on much improved, systematic historical background and data models, or data that is out there, so that we're able to move forward. Are you hearing a jackhammer?

JANE
I just heard something. Have you got some work going on at your place?

THOMAS
Oh, it's not my place. It's behind me, with very large jackhammers.

JANE
What you're describing there, Thomas, is the way analysis of data is used to predict things for us. That's not generative AI, is it?

THOMAS
No, it's the...

JANE
The earlier version of AI?

Generative AI is performative AI

THOMAS
There's a bunch of different models and perspectives of what AI does.

The generative AI is more of a lighter-weight, essentially performative model.

JANE
Performative? Explain what you mean by performative. I like that.

THOMAS
Essentially, if you think of what is in front of the curtain on a stage. It's the act and

the performance that's happening in front of the curtain. It looks like it's human, or relatively human, or relatively of human creation, mimicking a performance, something that would come from a human. A lot of the AI and ML that we've had before is all behind the scenes doing deep work, being able to discern models, patterns, shifts, changes.

JANE
We are expecting generative AI to talk like we talk. That's what people do when they go online with ChatGPT and they ask questions. I've done that. It's a fun game. I see what you mean by performative. That's a great concept.
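An aside for readers who want to see the "predict what comes next" idea in miniature: the toy Python sketch below counts which word follows which in a scrap of text, then emits the likeliest continuation. Real LLMs do this over tokens with billions of parameters, but the core move is the same; the sample text and names here are invented for illustration.

```python
# Toy next-word predictor: count word-to-word transitions in some text,
# then return the most frequent continuation for a given word.
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat slept"
words = text.split()

follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1  # tally what came after each word

def predict_next(word):
    """Most frequent continuation seen in the training text."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- seen twice after 'the', vs 'mat' once
```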

pi.ai is like a confidence game

THOMAS
ChatGPT, one of its traits is that it comes off as, and tries to be, official and correct and very confident in what it's putting forward. A lot of the different generative AI models and applications have very different conversational models around them. Some of them are doing... It's pretty much straight out of a confidence game that a con man would run, or like an attorney or a doctor or a nurse, someone in healthcare, where you're trying to build confidence with somebody, and it's really entertaining to play with them. pi.ai is one of them, and it will get you to answer questions. You will ask it a question and it's like,

"Tell me about this." And you know that it's  wrong. And it's like, "No, that's not quite   correct." Or, "It doesn't seem right." And it  will say, "Oh, do you have experience with us?" And you say yes. And it's like, "Oh, tell me  about it." It's getting you in a conversation  

and after about 15 minutes you're like, "What did it just do?" And I'm like, "The model is really good." It's very much like being in a doctor's office, or sitting next to a con man on the train. It's very much that, "Oh, that's a really great idea, can you tell me more about this?" And your guard starts going down. It's like, "Oh, I have the same understanding, the same opinion." It's a model that works really well for getting people to interact and to think it is doing well. Whereas ChatGPT is very authoritative, very confident,

even when it has no idea what it's talking about. And it doesn't have any idea of what it's talking about, because there is no human existence there. There is no understanding of right, wrong, correct, incorrect. It is just being performative. Sounding like it's human.

JANE
I gather it uses a lot of material that's collected on people's websites. In the Slack group that we're in, there was someone who talked about ways you could find out if your website had been...

THOMAS
Yeah, it's been scraped or...

JANE
And to my astonishment, my website, which is not huge, had been scraped, though not as much as another guy in our group. I'm sure you have a lot of different stuff online. You've been scraped, probably. On one hand, what difference does it make? And on the other hand, is it hurting the people who create original content?

THOMAS
Coming from a background where there's academic proof of where ideas come from, being able to point to things and say, "Oh, I got this idea." It's also just a really good human trait to be able to say, "Oh, I read this from Jane. Jane says this.

Go read Jane's piece. Here's my take on what  Jane is saying." And not being able to have   that human link back and giving credit. The  whole model for how humans got out of caves   is built on sharing and sharing of knowledge. How  we share knowledge, how we share good information  

to move ourselves forward as a human race, and improving the human condition broadly relies on us sharing information and being able to point back to truths. Where did this come from? What's the background on this? And being able to lean on things.

Generative AI, for the most part, doesn't care about that. It is just spitting things out. That, I think, is one damaging piece: you don't know if something has any fact behind it. The AI models, the LLMs, are going out and scraping things that they have spit out and that other LLMs have spit out, which may or may not be correct. It's essentially bad information feeding on bad information, which is really problematic. From a creator standpoint, there are financial considerations

around the IP that you're sharing. You may have been paid for it, you may have been sharing it freely, but quite often there is something financial in it, either building a network for business, or being able to help others, or whatever you're doing financially to support yourself, your family and so forth.

A lot of that is based on what you share, how you share, and being able to get some feedback and pointers back from it. There are known folks in various communities who will pick up other people's information and share it as their own at conferences. Sometimes it is straight copies of slides and other things, passed off as their own. It is highly frowned upon. Beyond the financial

part of it, there is also reputation and respect, and generative AI models have none of that so far.

JANE
How is that going to develop over the next 10 years?

THOMAS
I think part of it is being able to push back and have proof of where things came from. The New York Times is suing OpenAI because OpenAI had scraped The New York Times's collection. They didn't pay for it. They didn't respect the licenses around it. That helps. Also, essentially having regulation around stating that something is created by an AI or created by a human, or was edited by an AI. I know an awful lot of people who, in the blogs and the work they share out, have a clear disclaimer: "This is all human created content. This is mine." Not having that, I think, becomes a

differentiator, and it comes down to the security and privacy constraints around it, and also being able to sort through what is truth and what is not truth. What are made-up facts, and other things.

Coming out of a background where you do academic work, or even professional work, it's like, "Well, where did you get this? Is this a hunch that you have, or is this something that is actually proven, and you have something to back it up?" It's being able to prove your work, prove where you got something. I have strong feelings toward having that, toward being able to know where something came from.

JANE
I wonder if it's going to come down to legal questions?

THOMAS
A lot of the legal and regulation discussion that has happened within the generative AI community has largely... The companies that are out there now have a lead,

more or less. Essentially, the regulations  that they are talking about are protecting   their lead rather than protecting  the sources and protecting humanity. JANE What do you mean? How can it protect their lead? THOMAS Essentially not allowing other people to copy  their large language models. Being able to,   if you're using... Quite often there will be LLMs  that essentially are a mix of various things.   They'll put questions out to a few different  places, bring things back, and then they work   and massage through them. Part of it is also  for fact checking. Being able to do a query  

out to five or six different LLM models, bring it back, and be able to fact check. Can I do a search on this to find this phrase? Can I find this resource? Does it exist? Being able to do that sort of thing. And using other models in a mix is one of the things that is frowned upon by those who were early into the field.

JANE
Right.
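A rough sketch of the fan-out-and-compare pattern Thomas describes: send the same question to several models and treat agreement as a crude confidence signal. The model_a/model_b/model_c callables are stand-ins invented for this sketch; in practice each would wrap a real LLM API client.

```python
# Fan the same question out to several "models", then cross-check the answers.
from collections import Counter

def model_a(q): return "Paris"   # stand-in for one LLM
def model_b(q): return "Paris"   # stand-in for another
def model_c(q): return "Lyon"    # a disagreeing model

def cross_check(question, models):
    """Collect answers, then report the majority answer and its agreement share."""
    answers = [m(question) for m in models]
    best, votes = Counter(answers).most_common(1)[0]
    return best, votes / len(answers)

answer, agreement = cross_check("Capital of France?", [model_a, model_b, model_c])
print(answer, round(agreement, 2))  # Paris 0.67 -- low agreement means: go verify the source
```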

THOMAS
We need to improve what is out there. What is out there does not come close to meeting the hype that is generated around it.

JANE
We need to have ways of giving value to content that is created by people, where it's indicated that it's created by people, or there needs to be maybe a law. I can't imagine it happening in many countries. A law that says it has to be clear if it's a person or if it's AI. I don't

know if that could happen or not, but I think  something has to happen, something has to change. Likely it's going to be from the bottom up and  people essentially claiming this is all human   created content. Some people use generative AI as  essentially their rubber duck that they talk to   and ask questions to and work through ideas that  they have. It's essentially a smart rubber duck   where it's you're talking to it, working through  ideas, talking to it and realizing as you're   talking something isn't correct, but being able  to get some feedback from a thing helps them along   in their process. I don't know where that fully  fits in a all human created disclaimer, but if   it's somebody coming up with their own thoughts,  their own assemblage of ideas, that becomes really   helpful. A lot of our discoveries as humans and  things that have moved things forward greatly   have all been from a human looking at things  in the adjacent possible or a realization as   you're looking at a handful of different things  rather than repeating things that are out there.

AI tools: detrimental or supportive to learning?

Having an AI or a system that is repeating things that are out there, taking what is out there and bringing it back, I don't know if all of that is helpful. Looking at students who are using it, and also talking to an awful lot of people who do personal knowledge management and heavy note-taking, they're like, "Oh, have generative AI go out and read this article for me, give me a summary, turn 1,200 words into 300 words," and they put that in their notes. They didn't learn anything from it. The ability, when you're reading through something, to essentially have an argument with it, "Do I agree with this? Do I not agree with what this article is saying?", is a really important part of understanding and building your own knowledge base in your head. Having a tool go

and summarize information, when something that might be highly important for you is not in the summary, is one of the things that gets lost if you're having something automatically summarized.

JANE
What do you think, and this is a question I was going to ask you later, but now seems like the right time: education? The educational system today... It's one of the things I studied. I did a survey on 15 topics where I asked people around the world what they thought... Education was one of them. The question was, will the models we have today remain the same? Are they going to change a little bit, or are they going to change radically? And what you just said about students using generative AI to maybe not write a final paper, or maybe also do that, or even do research for them. It seems to me

we're touching on what should or shouldn't be done in education. What do you think?

THOMAS
I will take that narrow slice. On education, I go down a lot of different rabbit holes. But on being able to use generative AI for education, there are paths where it makes sense. Education and learning are essentially cumulative: you build on a foundation. If you're missing a foundation... If you're trying to do multiplication and you don't have addition, it becomes really difficult to understand multiplication, which is why you learn addition first. If you're in a subject and it's not particularly

your domain, being able to have a generative AI and ask it questions like, "What is addition? How does addition work?" as you're trying to understand multiplication, being able to get that background, have it lead you to resources for learning and understanding, and being able to ask questions like, "Is there a good YouTube video that summarizes this?" And you get a decent foundation of understanding from that video, whether it's like the Harvard CS50 set of videos, which are phenomenal. They're just a magical tool for learning computer science.

JANE
I'm not familiar with that. The Harvard CS50, it's computer science?

THOMAS
The professor is David J. Malan. He's a professor at Harvard. He was an undergrad there as well. But he's got this very... It's a very performative lecture on understanding computer science, being able to understand binary counting, all these different things. It is very visual. It is very, "Oh, this just made it so much easier to understand things." To the point where a freshman in high

school could watch it and actually get a solid understanding of computer science. I have looked at some of the background homework for the CS50 class, and it is brutal. The class makes things easily understandable; the homework pieces are for people with a background and the rigor of a Harvard system, to go through and essentially teach yourself and work through things.

But the classes and the lectures are just magical at breaking things down. They've come up with really good methods for participatory interactions. And essentially being able to stumble on those... Asking Google for good resources for easily understanding binary counting may or may not have you end up at CS50, which is all free, and you can go through the whole thing.

JANE
You think they've been scraped by AI?

THOMAS
I don't know. I don't know that AI would do... I think it would detract from the high value that the videos give, because part of it is that it is a truly performative lecture series.

JANE
I see, right. It's not just information on paper that could be...

THOMAS
Right. It's not somebody up there, a talking head walking through things; he will bring students up. It's just turning on light bulbs, with the different slots for binary, to do binary

counting. This is one. This light is off, this is two. This is... And it's like, "Ah, this actually makes sense." And it clicks. Generative AI, or certain generative AI tools, will pick up that, "Hey, there's an awful lot of people talking about this," and be able to surface things in search that way, finding things and pointing to things.
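For readers who want the light-bulb demonstration on the page: in the tiny sketch below, each bulb is one binary digit, and the on/off patterns count upward the way Malan shows on stage. This is a toy illustration, not CS50's actual materials.

```python
# Each "bulb" is one binary digit: o = on, . = off.
# Three bulbs count from 0 to 5 the way three binary digits do.
for n in range(6):
    bulbs = format(n, "03b").replace("1", "o").replace("0", ".")
    print(n, bulbs)
# 0 ...   1 ..o   2 .o.   3 .oo   4 o..   5 o.o
```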

JANE
That's very useful. That saves time, if AI is pointing students to things that they might otherwise not come across.

THOMAS
Yeah, being able to use a generative AI tool for learning, to understand foundational issues where you may have a gap. If you're taking your third semester of computer

science or environmental theory, and you have a concept that you just completely skipped, or you got a D on that test, you need to understand what that is. And it's like, "Hey, can you give me a good overview of what this is? Can you give me more information? Can you give me pointers to good resources?"

JANE
Students should learn how to use these tools then. That should be part of education. How early should that start, do you think? You're talking high school, college maybe, I think. Can you do this with grade school kids?

THOMAS
Maybe? One of the other things with... There's a tension that I see between technology and humans interacting with humans. Being able to bring together those who are near in thought,

which technology is great for. Like I had met  you at KMWorld, I think the first one in DC. JANE That was a long time ago, wasn't it? THOMAS Yeah, and that was my second KMWorld, but we  got to know each other essentially through   digital environments that were bringing  people who were near in thought together.   We're geographically very far apart. As  well as many of the other people in the   group. Technology is really good for that.  But also bringing back the... With education,  

understanding how to interact with other human beings to get work done, to share, and how to work through difficult problem sets together in a group is, I think, highly important. Subbing out human interaction, and learning how to be social as a human-to-human experience, and subbing in human-to-technology: I really think that needs to come later, after we learn how to do the human-to-human part.

JANE
That's really interesting. That reminds me of... I talked to Sugata Mitra in one of my conversations for the podcast. The guy who put the computer in a hole in a wall, and the little Indian kids came up and learned how to use a computer with no adult supervision at all. In the end he developed a School in the Cloud, where the idea is that students don't

need teachers other than to guide them and to help them figure out interesting questions. That's a little bit of what you're talking about. He's talking about young kids, 8, 9, 10 years old, figuring it out themselves with a computer at hand connected to the internet.

THOMAS
There's a balance, watching what is happening in the world right now. Having that understanding of human-to-human interaction, how to get along and how to work together, is highly important. Seeing things that are happening, I think large parts of the populations have lost that. It is seeing

things in a myopic perspective, and things can only be in a myopic perspective. My view is essentially the one that is right; everything needs to adapt to my view or it's wrong. As humans, we didn't evolve to where we are now by that model. We evolved by collaborating, communicating and working together. Essentially improving the things around us as well as our lives, and being able to move from a subsistence model to one where we're able to actually think thoughts like these and not be chasing down our next meal. We have that

affordance because we understand how to work together and to communicate and collaborate.

JANE
There's trust. I think trust comes into play.

THOMAS
Right. I have a difficult time with the word trust, not because of trust itself, but because

the word has many different meanings. And in an awful lot of the consulting work that I did, people would lean on trust because it is a very powerful word. Around 2008, 2009, I started banning the word trust, and you had to use other words.

JANE
Oh, that's interesting. What words did people use?

THOMAS
One of them was comfort: "I find comfort there," "I have confidence in what they're saying." There were about 10 to 12 different terms that people were using regularly. In working with online environments and social computing and social technologies, I started using social comfort as one of the questions. Are you comfortable using this tool? What are the things that make you uncomfortable? And I could get really good responses. If I asked somebody, do you trust the system? Do you trust other people in the system? They couldn't really give a good answer, but they could surely tell you whether they had comfort or did not have comfort, where they found comfort and why, and where they did not have comfort and why.

I found it a much better term, and whenever I come up against trust now, I still back off of it. I don't know if you can see them behind me, but I've got three or four books on trust, the Fukuyama book among them, and I just tried to understand what trust was, because it really wasn't clear. It's not clear because it means so many different things. It has turned into a very powerful word, but it doesn't have clarity behind it.

JANE
That's interesting. It is a powerful word. I've heard about zero trust.

THOMAS
Yeah.

JANE
In fact, I think you talked about that in the conversation we had online. That term I'd heard about, but it wasn't completely clear to me what zero trust was.

THOMAS
Zero trust comes from a security standpoint. And I think over time, not only do technology and where we are heading need to understand sustainability, be very friendly to sustainability, and think about what paths we are taking. Do we need a faster computer? If it comes at the cost of sustainability, then that's a problem. But if we're able to have faster and far more efficient computing, well, that's a good win. From a security standpoint, as we become more connected digitally, security becomes more and more of a value that people are realizing they need to embrace. The zero trust model is essentially trusting nothing that you connect to: every time you log in or connect to a system, you authenticate to it and it authenticates back to you.

And you know each other... It's not only username and password. It could be a passkey as well. It could be two-factor authentication. You're continually having to prove: yes, I am who I am, and there's not anybody in the middle. Part of that is also being able to protect privacy, and getting to privacy models that are not in the hands of the people who are trading on our private data. Essentially flipping that model, going back to Doc Searls's model: rather than a vendor-based model, where people essentially say, "Yes, this vendor has the rights to use my data for their purposes, not share it around," you've got a VRM model rather than a CRM model. You've got vendor relationship management

rather than customer relationship management, where the people who are selling you something don't have control of your data. The data is one of the things they could otherwise sell, but essentially the individual has more control over who has which pieces of information. It's been interesting watching Apple, which now asks, when you connect to an application for the first time: do you give it access to all of your data that runs through it and that is on your device, based on different categories, or do you want to limit it? And sometimes if you limit it, that application will not work. You just have a dummy app. It's just figuring out what you put in.

JANE
It's a compromise. You have to make your personal compromise, don't you?

THOMAS
Yeah, or you manufacture information about yourself.

JANE
Put in false information, but then you have to write it down and remember that that's the information you gave in that place.
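Writing it down and remembering is exactly what a password-manager note does. A minimal sketch of the idea, with all names invented for illustration; a real vault would of course be encrypted rather than a plain dictionary.

```python
# Manufacture a random, unguessable "security answer" per site and remember it,
# the way a password-manager note would.
import secrets

VAULT = {}  # site -> {question: fake_answer}; illustrative, not encrypted

def fake_answer(site, question):
    """Invent a random answer and store it for that site and question."""
    answer = secrets.token_urlsafe(8)  # e.g. 'pQ3xN_0kZ1w'
    VAULT.setdefault(site, {})[question] = answer
    return answer

fake_answer("bank.example", "What was your first pet's name?")
# Later, at the security prompt, look it up instead of trying to remember:
print(VAULT["bank.example"]["What was your first pet's name?"])
```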

THOMAS
Yeah. The ones that have always driven me crazy are security questions. What was your dog's name? Things like that. And I'm like, "Most of what it's asking is searchable on the internet." It's not really a security question. The only way around it is to manufacture that information. Then essentially you are managing two to three different variables for each and every system that you're talking to. It's like, "What was your first pet's name?" And it's like, "Okay, for this system, it is this name. For this other system, it is another name." Your password management methods need to be able to embrace all those different... What did I tell this system? I did that for years and it just got to be absolutely crazy. I'm like, "Not only do I not remember passwords, I don't remember what I told which system," even though I have it in my password management tools.

"Not only do I not remember passwords,  I don't remember what I told what system   in which what I told what." Even though I have  it in my password management tools. It's just- JANE It's putting a burden on us, isn't it,   to deal with a complex situation? And  I don't know how it could be otherwise. With zero trust being able to do  methods that we currently have,   being able to have two-factor authentication or  the new Passkeys that are starting to roll out,   that becomes something that is helpful and just  essentially having two different ways of doing it. JANE With facial recognition, you think  that's a good solution? Or fingerprint? THOMAS As one means, it's not necessarily foolproof.  There's an awful lot of people who look like   others and there's ways to around things, but  being able to have information that you know and   then being able to authenticate with something  that you have, like your phone. If you're using  

facial recognition or fingerprint or whichever, I think that's helpful. It's adding a little bit of friction, but the value is not losing access to our own data and our own accounts. I'm getting 10 to 15 "Hey, you requested to change your password on this platform" messages each week, from different places. I now have two-factor authentication and other protections on

basically everything right now, just because it's like, "I didn't put this in." And they say, "If you didn't put it in, let it go." And I'm like, "Okay, but if someone is able to get in the middle and get to email or whatever, I've lost access to something that I've been on for 15, 20 years." Someone either can impersonate me or, I don't know, whatever value they would find.

The 15-minute city in a global digital world

JANE
Going back to the digital world that we live in, do you think it is destroying communities? I'm talking about physical communities where people live. I'm very interested in the concept of the

15-minute city, which came from a French guy  working for the mayor of Paris. Paris is going   very much in that direction. It's not perfect at  all. What do you think about that? Or maybe you   could explain it quickly for people who don't  know it and give us your opinion about it.

THOMAS
The 15-minute city is essentially being able to have what you need within a 15-minute walk of your door. This is fully based on a city. If you live in a rural area, or if you're out in less dense suburbs, the 15-minute city is not necessarily going to be a model you can replicate.

JANE
It wouldn't work for me. I live in such a tiny village, it would take me 15 minutes to walk to a place where there might be one store, and that's it. Anyway, that's not what we're talking about. In an urban context...

THOMAS
There's an awful lot of layering, but essentially it's being able to walk out your door, being able to send a package, being able to pick up groceries, get fresh vegetables. Having what you need as essentials, and the next level or two above essentials, within a 15-minute walk. One of the things that it does is it starts building community, and you start

knowing your neighbors. The application Foursquare looked at data patterns for people living in New York City, and essentially people had two different hubs: one is home and one is work. They didn't go outside of more than a three- to four-block radius of either one of those locations.

Going to restaurants, going to the dry cleaner or laundromat or grocery: 90% of their existence was within a three-block radius. But one of the things that you have,

there's a number for planned communities, around 7,000, that is one of the magic numbers on scaling. There are a lot of magic numbers around all sorts of different layers of social progression and social scaling, and one of them is around 7,000, which for planned communities means an elementary school, a high school, a fire department-

JANE
Medical care?

THOMAS
... and a small police station and medical care. The 4,000 to 7,000 range is also roughly where you feel comfortable, because you are seeing familiar faces. You may not know them, you may have never said a word to them, but you're seeing faces that you recognize, and that gives a human comfort. That three- to four-block radius is essentially a 4,000 to 7,000 person density for both of those hubs. There is some level of familiarity. People who live in areas that don't have high crime rates but do have more than moderate crime feel safe, or can feel safe, in their neighborhoods because they have familiarity with the people around them. They not only know their neighbors, they recognize the people a few blocks away. If they're going to the bus or they're out, they know when there is somebody who is unfamiliar.
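To make the 15-minute radius concrete: at a typical walking pace of roughly 80 meters per minute, 15 minutes is about 1.2 km. A back-of-the-envelope sketch, with coordinates and amenities invented for illustration:

```python
# Which amenities fall within a 15-minute walk of home?
from math import radians, sin, cos, asin, sqrt

def walk_minutes(a, b, pace_m_per_min=80.0):
    """Haversine (great-circle) distance between (lat, lon) points, in walking minutes."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    meters = 2 * 6_371_000 * asin(sqrt(h))  # Earth radius ~6,371 km
    return meters / pace_m_per_min

home = (48.8566, 2.3522)  # made-up point in Paris
amenities = {"bakery": (48.8580, 2.3500), "grocery": (48.8700, 2.3300)}
for name, loc in amenities.items():
    m = walk_minutes(home, loc)
    print(f"{name}: {m:.0f} min -- {'inside' if m <= 15 else 'outside'} the 15-minute radius")
```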

If you drop them into a neighborhood that may have a far lower crime rate, they do not feel comfortable, because they do not have any familiarity with the faces around them. You're going from a moderate-crime environment where you feel somewhat comfortable, just because there is familiarity with the people around you, to one with a low crime rate where you don't feel safe, because you don't have familiarity through that recognition. Being able to have things where we're not having food delivered. Our delivery services will tell us,

"Hey, you actually have this within a few blocks."  You're trying to buy a fresh loaf of bread,   here's an option for you. And just being  able to bring things back and saying, "Hey,   this is available." Or ordering a book  from a large multinational bookseller  

and product seller and having it say,  "Hey, this book is actually available   from Susan's Book Nook three blocks away from  you. Do you want us to reserve it for you?" Being able to have that connection.  You're actually walking out the door,   connecting with your community, building that  human bond with other people in your community.  

I think it's something that would greatly help. There are two human draws. One is being able to have a local community of people who are geographically close to you, having that comfort level with people who are familiar, getting to know them, holding doors for people, going into stores and having them help you, being able to say hi or good morning as you're walking past. Then the other one is being able to bring people closer who are near in thought. People who have similar

interests. If you're somebody who has an interest in something that is not available within a 15-minute walk, or even within that city. If you're in, let's say, Paris, and you have a large interest in Indonesian food and there isn't anything close to you, then to connect with people who have an interest in cooking Indonesian food, you're likely going to have to go online.

Being able to bring those near in thought and near in interest closer, but then also being able to have that human interaction, understanding the value of both of those, and using technology to enable both of those systems to improve, whether from an entertainment or food-interest perspective or just a knowledge perspective: knowledge at your fingertips that's globally shared and globally accessible.

JANE
You see a blending of the two worlds?

THOMAS
Yeah. If a large multinational company that ships books all around, which I may have used more than once, can recommend something at Susan's Book Nook, they may get a cut of it, 5% or 2%, for doing that recommendation. They're making money on it, and it's bringing connection to your local community. You're essentially looking at the technical side of things and the social side of things and bringing them together. We're coming together as humans, as well as increasing human knowledge and understanding.

JANE
Do you think it's possible that these big technical giants could be persuaded to do that? They make more if they sell to you directly, I imagine. They would need some kind of incentive to recommend local solutions. Even if they get a cut, it wouldn't be as much as if they had made the original sale. Or am I being too pessimistic?

THOMAS
I think there's a model in there. I've watched Amazon do local bookstores: where I live, we had an Amazon bookstore show up. There are some really good independent bookstores around. The Amazon bookstore existed for about two or three years and then was

gone. With the store, they paid all the costs of running a physical business locally. But having their name out there, having a recommendation system, being in that life cycle of commerce, that may be beneficial to them if they're picking up 2% on a local sale where they are not paying the costs; they're just making a recommendation.

JANE
And they could be doing it all over the world, basically.

THOMAS
Nearly.

JANE
That's interesting. Yeah, I see what you mean. There is a model there.
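As a back-of-the-envelope illustration of why that 2% could still be attractive (all numbers invented for the sketch): a direct sale carries fulfillment costs and risk, while a referral fee is close to pure margin.

```python
# Illustrative comparison: direct online sale vs. a 2% referral on a local sale.
price = 20.00
direct_margin = price * 0.10   # assume ~10% net after warehousing and shipping
referral_fee = price * 0.02    # 2% cut, at near-zero marginal cost

print(f"direct sale:  ${direct_margin:.2f} net, with inventory and shipping risk")
print(f"referral fee: ${referral_fee:.2f} net, essentially pure margin")
```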

THOMAS
Figuring out what that is for the local bookseller, figuring out what it is for the large multinational. There is human value in being able to do that. Whether they have an interest in human value and societal value, or whether it's just pure profit, that becomes a question. It's a question that comes up often in many places, and sometimes the line is nowhere near clear.

Sustainable technology, key criteria for the future

JANE
This is my final question, because I think we're going to close down. Overall, would you say that you are optimistic about where we'll be 10 years from now? Or, to make it harder, 15 years from now. Are you optimistic that we'll be in a better place from a human viewpoint, or not?

THOMAS
The big "it depends" is sustainability. One of the large players in sustainability, as a positive or a negative, is technology: being able to have better, smarter, more efficient models that are not eating as many resources. Considering resources and sustainability in an awful lot of the decision-making, for building large generative AI models, or when somebody thinks it's smart, again to go back to Bitcoin, to throw massive energy down a hole. That becomes problematic. What are we getting out of it as a society? What is the benefit from their being able to do it? AI and ML being able to understand the changes to our environment and what's happening around the globe, we really need that understanding. But having a

performative bot that sits on our desktop and eats a ton of energy on the backend to answer questions? I don't know if that's a great benefit and a good use of our resources.

JANE
Are you on the fence about how it will go? I know it's a simple question; it's not a simplistic answer. It's a complex issue.

THOMAS
No, it's a very complex... Yeah, and it's one of those where... I've got to talk next

month to the Complexity Lounge on complexity. And it's like, "Oh, for the answer to this, tune in." Yeah, it's a really complex problem. Technology really needs to sort out where it sits on sustainability. Those who are able to work through having more secure systems, having more systems that respect our privacy, having code and systems that are far more efficient, and also using renewable resources for that. Not offset credits, "We planted 700 trees, so therefore we can set up this new server farm," but truly using renewable resources and not doing the trade-offs. I think the faster we can get to that from the technology side,

the more we can do with technology, and essentially have technology have our back rather than have it become part of the problem instead of part of the solution.

JANE
Thomas, you need to write a book about that. Does technology have our backs? I'm serious. I'm not joking. It's an approach. Maybe there's a book already out there that does that. I don't know. I've never looked it up from that viewpoint. I know there's a lot of

people writing about AI and the goods and  the bads and all that, but you're talking   about it in a different way. I think you  have an idea there that is quite powerful. THOMAS Yeah, I need to get writing, again, outside  of back channels. I've been reconnecting with   an awful lot of folks over the last few months,  mostly being heads down for the last five years   working. The common thing I get is you need to  get back writing and sharing things out again.

Oh yeah, you've done so much, Thomas, in your short life. Well, I'd like to thank you for your time, and if you do write that book, give credit to this podcast as being the place where the idea was generated, to use a common word.

THOMAS
I will do that. I will put that in my notes.

2024-02-09
