Hello, everyone, and welcome back to GOTO Unscripted. My name is James Lewis, and I'll be in conversation today with Richard Feldman. Welcome, Richard. I think we're just gonna explore some ideas around things like language design. Richard will introduce himself in a moment, but he is the creator of the programming language Roc, and author of the wildly successful, I'm told, he paid me to say that, "Elm in Action." But, yeah. So,
we'll just have a conversation about languages, and programming in general, I guess. But, welcome. Richard, maybe you could introduce yourself. I created the Roc programming language, which is still kind of a work in progress. But I wrote the book "Elm in Action," and also spent a lot of time with Rust. And later in the conference, I'm gonna be talking about Rust and Zig, and, sort of
together. Very interesting language design. You were telling earlier, before the cameras were rolling, about how you were there when Rich Hickey announced Clojure, at the conference. I was. I was at JAOO, I think it was still JAOO at the time, in Aarhus, in Denmark. Rich turned up and sort of blew everyone away with this announcement of Clojure. And obviously, it's been pretty successful since. We've had many projects in Thoughtworks that have used Clojure
over the years. And I know there's a consultancy, JUXT, in London, that is very successful as a pure Clojure consultancy, actually, in the fintech, mainly in the fintech industry. What do you think? I mean, it'd be interesting to get your take on that. Do you think there are different domains where different languages are more or less suited? Well, there certainly seems to be an element of suitability, but there also seems to be an element of just, sort of, cultural momentum. Like, something will get traction in a particular domain. Maybe it is, maybe it isn't, like, especially well-suited for it, but then it just sort of perpetuates. So, the example that comes
immediately to mind is Rails and Ruby. Right. I mean, if you were to zoom out and say, aliens land, and they're gonna pick which of the programming languages are gonna become big in web development, I don't know why anyone would say, "Well, it's gonna be the one created by the Japanese guy that's only big in Japan right now, that's, the tagline is, 'Let's make programming fun.' That's what's gonna be used widely in industry, you know, and blow up in the next 10 years." I don't think anyone would have predicted that. So, I don't think it's necessarily just about,
you know, like, how well-suited it is, it's, like, the perfect fit, as much as it is, like, well, you know, one person, like DHH, made Rails, that resonated with a lot of people, and because Ruby was the language that he chose to make that in, yeah, he could have made it in Python. And he would probably say, "Nothing else but Ruby would have inspired me to make Rails." But I think you could pretty easily make the case that someone could have made something as successful as Rails in a different language. The thing, you mentioned Python. That's super interesting. Because I remember when Rails was massively taking off. And in North America
in particular, and in India, Rails became a huge thing. I mean, our founder at the time, was very taken with it, and some very persuasive people were talking about it. Obie Fernandez, for example. And it seemed that we suddenly had a load of projects in North America and India
using Rails. And we still do. I think the world's largest Rails project was a Thoughtworks project in Atlantic City, maybe. Really? But the weird thing is, it didn't spread to the rest of the countries that we're present in. So,
the UK was, and still remains, very much a Python shop. So, maybe it's not just a domain-specific thing. There's also, like, a geographical thing going on. Well, I think you could generalize that to culture. Certain pockets of culture might be geographic, or might be just other things that contribute. I've spent an increasing amount of time over my career, like, learning about why things get adopted and why they don't. And the more I learn about it, the more reasons I discover. And it seems like
there's just an inordinate number of variables. And as programmers, we like to look for simple solutions and simple explanations for things. But, much like...I would say another area where, the more I get into it, the more variables I discover, has been, like, performance optimization, where, like, when I was in school, you know, all the focus was on big O notation, and, like, what's the asymptotic complexity of this algorithm and stuff like that. And now I'm like, "That's, like, this much of it." That's the tip of an extremely large
iceberg. Similarly, with adoption, of languages or technologies in general, it's like, you know, I would have thought, early on, "Oh, well, people use that because that's the best thing for it. What else is there?" And it's all these cultural and timing factors that come into it. I remember doing a thing with one of my colleagues about programming language adoption. And certainly, it was as much about culture and availability as anything else, it seemed. You know, are you better off picking something where you know there's a lot of people out there who you can just hire for it? Or, another example would be, a counter-example, you know, we had a publishing client who deliberately chose Scala because it meant that they could offer a potentially more "fun" programming environment for developers to come in, because they couldn't pay the same rates as the banks. So, there's this almost, like,
trade-off is, do you offer this more interesting, exciting environment, versus, okay, just, yeah, whatever it is, a thousand euros a day for a standard developer job in a bank. I think that's an underrated trade-off, if you're a company and you're considering a novel technology, and I've talked about this before in other settings, but this is something like, you know, both my previous job and my current job have used Elm on the front end as, like, the entire front end, not just, like, a little part of it. Embracing that means that you get to be very selective about who you hire. We just filled a front-end role, and the recruiter was talking to me about, yeah, we, you know, hired this guy, and it was close. We had to decide between him and, like, you know, a couple of other people who wanted this role. Usually, it's the other way around, where employers are like, "I just wanna find anybody who fits this description, who meets our criteria, and it's really hard to find people." But you flip
the script when you're offering a technology that people wanna use, but that not a lot of employers are using. And that's almost sort of like a, you know, self-fulfilling prophecy, in the sense that if enough employers do that, then it flips back around. But then, by then, it's mainstream, so then it's not notable anymore, and now you are saying, "Oh, well, I can just find lots of people to do it, because it's become mainstream." But I think most people are aware of that side of the dynamic, but they're not familiar with what happens before a language gets mainstream and what the dynamic's like over there. I guess the big question is, did the person you hired have 25 years of Elm experience? Well, let's see, Elm was created in 2011, so yes. Of course. Which, I guess, when it comes to Elm,
I'm gonna throw my hands up and profess to not generally being any good at all at front-end software development. That's fine. It's not something I've done in my career, to be honest. Any front end that I program tends to look like a bad implementation of Excel. That's pretty much it. Well, Excel is not an easy thing to implement, so, you know. Well, funnily enough, coming back to Ruby, the creator of RSpec, Nicholas Nielsen, showed me an implementation of a spreadsheet written entirely in the const_missing method. Wow. Because if you think about
having a spreadsheet, you have to be addressing the cells, you know, by capital letter first. If you didn't have that defined as a constant... Oh, that's hilarious. Wow. And you could do all the calculations in "const_missing". Have you been surprised at the adoption of Elm and how successful it's been? Yes, but maybe not in the way you might guess. So, I would have thought that it would have been sort of all or nothing. I would have thought that either a language like Elm would just take over the world, or it would just peter out into nonexistence, and people would, you know, walk away from it. Because I've seen that happen with various languages. Like, TypeScript would
be an example of taking over the world. That's happening right now. And then CoffeeScript would be an example of something that sort of petered out and, you know, is not used anymore. Elm seems to have sort of, like, found a solid niche. There's just, like, a chunk of people who are like, "Yes, this is how I wanna do front-end development," but it doesn't seem like it's on track to take over the world. It seems like it's on track to be... Well, it already is, like, a self-sustaining thing, and it seems like it's on track to sustain. So, that's something that we've
seen with a lot of backend languages. There are plenty of backend languages that are not, like, no language has taken over the whole backend. There's just, like, people have preferences on the backend world. Whereas on the front end, it's very much been, you can use any programming language you want, as long as it's a JavaScript dialect. Like, it could be JavaScript or it could
be TypeScript or it could be CoffeeScript, all of which have the tagline, "It's just JavaScript," explicitly, or implicitly in the case of JS itself. All of the other ones have been, like, kind of niche players. But if you think about it, I mean, like, on the backend, it's really common to have a language that has, like, low market share, but has, like, quite a healthy, active community, with lots of people in it. It's just on the front end, that's, like, a weird thing to be. And Elm being a front-end-focused language, I just never guessed that. I thought it was, like, oh, it's either gonna take over or it's gonna peter out. I didn't expect it to become more like a backend language,
in that it's just, yeah, there's a chunk of people who like to do it this way, and it's fine. You mentioned TypeScript. So, that's the elephant in the room in some ways, right? So what would you ascribe to...can you see, sort of, any particular reasons that TypeScript has sort of eaten the world, or there's some discussion about it at the moment? They're both, on the surface, fairly similar ideas. Elm and TypeScript? Or...? Which two things? Well, so, essentially taking something that's gonna be able to be used in the browser, but offers maybe a safer, allegedly more productive perspective on programming the front end. I think, like, when I think about comparing Elm to JavaScript and TypeScript to JavaScript, and I guess also TypeScript to Elm, like, TypeScript and JavaScript, I mean, TypeScript is really like, "This is gonna feel like JavaScript, but with types." Elm is like, "I am a programming language,
and I run in the browser." It has no relation to JavaScript other than, like, as a compilation target. So, you mentioned, like, Clojure earlier. I would liken Elm to Clojure, except, like, even more separated from the hosts. Like, Clojure is very much like, "I'm a programming language, but I intentionally have some Java-like elements inside," but I don't think anyone who's written Clojure and has written Java would say, like, "Oh, this is a Java dialect," you know? But they do, like, share data structures and things. Whereas Elm, it's even less than that. It's just kind
of like, well, we use the same, like, string representation under the hood and stuff like that, but that's kind of about it. It's, like, this feels like a different programming language. Whereas TypeScript feels like this is a new take on JavaScript, I would say. I guess that's maybe, it's a good comparison, I think, with Clojure as well, because if you look at something like two different JVM languages, like Clojure and Scala, say, I mean, most people's entry point into Scala was programming Java without semicolons. That was the
old joke, wasn't it? And, whereas Clojure is a fundamentally different paradigm, a fundamentally different way of approaching writing code. That's a good point. I've talked to people in the Scala community who talk about there being sort of three different ways that people do Scala. So, one is, like, Java++, or Java without semicolons, maybe. Another is, I want a hybrid OOFP language. I want a language that has a lot of OO support and a lot of FP support, and I'm gonna use them together. And I can't get that from Java, so Scala is the way to go. And then the third group is,
I want Haskell, but my boss won't let me use it, so I'm gonna use Scala as my Haskell stand-in, and that's also a popular way of using it. But I don't see the same thing in Clojure or Elm. It's like, pretty much it's like, nobody's using Clojure as, like, Lispy Java. Everyone's using it as, like, Clojure. The same thing in Elm. Would you say... Maybe a bit random, but so, I remember a few years ago, when Google first, sort of, published, well, publish is the wrong word, but created and then started talking about Dart, the programming language. We had that on...we have a thing called the Thoughtworks Technology Radar, where every six months, we sort of take new stuff and think about it and assign it, like, an assess or trial or hold. And,
at the time we sort of said, we put Dart on hold, on the basis that we were super worried that adoption was gonna be limited by the fact that other browsers weren't gonna jump on board, right? Because it was very much a Chrome... With the VM part of it? The VM part of it, yeah, yeah, yeah. And of course, that's now come back, right? I mean, it shows what we knew. Like, some years later, we now have Flutter, which is kind of, you know, very much being adopted quite rapidly at the moment. I kind
of find that kind of interesting, where you've got something that sort of, at one point in time, wasn't the right time for it to be adopted, but then later on, it suddenly is the right time. Well, I think that's a great story. That's the...I mean, Dart, to me, fits the same category of adoption as Ruby, where it existed for quite a while, like Ruby was just big in Japan for a while. Ruby was created to be, like, "Let's make a language..." I mean, Matz was like, "I wanna make a language that's fun to program." That was the word he used. I mean, Dart, as I understand it, was created, basically, because of the VM, because Lars Bak, you know, had done V8, and was frustrated by how difficult it was to do certain optimizations around JavaScript, and he was thinking, "If we just had a different language that felt a lot like JavaScript, but which was different in certain very specific ways, we could make a much more efficient VM implementation out of it," and that was kind of the motivation behind creating Dart. And,
you know, if you think about it, why would people want to adopt that unless you're a VM author? It's like, okay, but I'm over here doing my web development job. What's the pitch to me? I don't, you know, care about how easy it is to optimize the VM or how optimized it can be. I just... Especially since, you know, you and your team, Lars, did such a good job making V8 a lot faster. Thank you, Eric, as well. What's in it for me to switch from JavaScript or
CoffeeScript, which was big at the time, to Dart? But then the answer comes with Flutter. And again, you could make the point, Flutter didn't have to be implemented in Dart, but it was, the same way that Rails didn't have to be implemented in Ruby, but it was. And that, I mean, if you look at what percentage of Dart usage in the industry is not Flutter, I would guess it's very small, similar to Ruby and Rails. I mean, it's, like, overwhelmingly Rails, it's overwhelmingly Flutter. So,
the term I use for this is, like, the killer app adoption explanation: there's some application of the language that's so popular that it just brings the language's popularity along for the ride, because people wanna use that thing, and that thing is implemented in that language, and they want it so bad they'll use whatever language it happens to be implemented in. That's quite a nice segue for me to go and talk a little bit about Rust, maybe. Because you mentioned your new language, Roc, that you're writing. We'll come on to that maybe in a minute. Sure, sure, yeah. But you mentioned the fact that the compiler is written in Rust, and that's another... I mean, I think, well, we are starting to see, in terms of Thoughtworks, and our clients, adoption in very specific areas, for Rust. Specifically, there's lots of interest, for example, in automotive, or, you know, sort of safety-critical systems and these kinds of things. What made you choose Rust yourself? This is going to bring a little of my talk
into this conversation. No, that's cool. This'll be published a lot later, so... Basically, it's important to me that the Roc compiler be very, very fast. I want it to run as fast as possible, and I certainly did not want to get to a point where I'd built this whole compiler out... I say "me," because that's what I was thinking at the time. Now it's a bunch of people working on it, and a lot of them are better at this stuff than I am. But, you know, I didn't wanna end up with a compiler that was very
feature-complete and very done, and then we're like, "And we can't get any more performance out of it because of the language we've chosen, that is, like, garbage-collected and whatnot, and there's just this ceiling we cannot possibly exceed, no matter how many hours of performance work we put into it, unless we rewrite it in, like, a Rust or a C or C++ or something." And I thought, "I don't want that to happen. I want this to be as fast as it can be, and I don't wanna hit that ceiling." So, that meant one of a couple of different options. One was to do C or C++, which I'd had some really bad experiences with earlier in my life around, like, getting memory unsafety-related bugs that were painful to track down. And I was, like, well, the pitch of Rust is that you have no performance ceiling, but, somehow, and I didn't really know how at the time, they do compiler things to help you not run into those memory problems. And so I
thought, "Well, that seems like kind of the only game in town that fits all my criteria. There's no performance ceiling, and yet I'm not going to get these memory unsafety bugs that are a nightmare to track down." So, I took the plunge, and I'd done a little, like, toy thing in Rust, or I'd built a command-line app that I'd never quite finished before. So I had a feeling for the language, and I was like, okay, I can get this. I can stumble through it. And now I feel very comfortable in Rust. But when I started, it was like, just because I had this list of criteria,
and that was the one language that fit them all. And you got to choose as well, which is the nice thing, right? Yes. Very important. I remember my colleague, Erik Doernenburg, he's based in Germany, he's a head of tech there at the moment. And he did a great talk at one of these events on Rust. And it was back at the time when not that many people were adopting it. So, it was quite early on. [inaudible 00:17:54] and it was a bit of an overview on why Rust,
and, again, actually why some of the other languages that have started to appear around, which, you know, like Go and, oh God, I always forget the Apple one, Swift, is it? Yes, Swift. And why they, you know, what problems they were attempting to solve, you know, which is around memory safety. It's something like, I can't remember the exact number, but some very high proportion of bugs... Seventy percent of CVEs, yeah. ...at Microsoft. There you go, right? So,
yeah, I mean, this was around that. But he did this lovely little thing at the end of it, where I think he...it wasn't Conway's Game of Life, but it was a similar kind of agent-based implementation. And he always uses that when he's learning a new language, right? You need something, you know, some framework to understand when you're learning a new language. And he started running it, and he was sort of running multiple iterations of it, and he was looking at the performance. He was like, this is a lot, lot faster than, I think it was, a JavaScript implementation, ridiculously faster, orders of magnitude faster. But he thought, "Actually, I thought it'd be better than this." And he realized
he hadn't turned on, and I'm gonna get this wrong, but there's some kind of, like, setting in Rust, I think, which you can turn on. It's, like, "production mode" versus...does that make sense? Oh, yes. This is an optimization flag, yeah. Right. And he'd forgotten to use that. And then suddenly it was, like, three or four orders of magnitude faster. Which I quite like as an idea, yeah. That sounds about right. That flag makes a big difference.
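For readers who want to reproduce this: the flag being described is Cargo's release profile. A minimal illustration, assuming the standard Cargo workflow, where unoptimized debug builds are the default and optimizations are opt-in:

```
# Debug build (the default): compiles quickly, with few optimizations.
cargo build
cargo run

# Release build: turns optimizations on; runtime performance is typically
# dramatically better, at the cost of longer compile times.
cargo build --release
cargo run --release
```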
Maybe let's generalize it. There's Rust in particular, which is spiky, I'm told. I've only read some books. I've not made any serious attempt to learn it. But I'm informed it's quite spiky. There's quite an adoption curve. How do you go about adopting or learning new languages? Yeah, so.... Or do you know enough now that you just go, "Oh, it's that sort of thing?" Well, it's funny, because you mentioned, like... I know a lot of people who like to do the same, like, "I'm gonna learn a new language, I'm going to pick a project, like Game of Life, that I'm very familiar with, and implement that in the new language." I'm almost the opposite, where I always need to have some specific project
in mind first, where I'm like, "I wanna build this in this language," or this, whatever the new technology is, and then that motivates me to push through whatever the learning curve is, because I'm like, "Well, I can't get it any other way, so I gotta do it." Whereas, so, I guess maybe I don't tend to just seek to learn languages just for the sake of learning them. It's more like, there's some problem I wanna solve. This seems like the right tool. All right, let's go. So,
I don't think I've ever done the... And picking the easy things, right? I'll just write a compiler. I'll just write a compiler, which I'd never done before either. But, I guess, I don't know, at least for me, the hard part of learning something new is generally sort of finding the motivation to climb over these obstacles that I hit, whatever they might be. And I also am aware of, there's an element of, if you pick a project
that's too hard and a language that's too hard, and, you know, like, those can kind of compound, for sure. But I had previously done this little command line app in Rust, where, actually, the motivation there was, it was the Elm test runner, and now there actually is, somebody else separately went off and, like, did a different Elm test runner implementation in Rust. But at that point, it was mostly just frustration with Node.js APIs, which is what the one I'd written previously was in. And, I one day... Not because Node is blazingly fast. No, it was nothing to do with performance. I wanna write this in something that has a different set
of APIs, shall we say. And I didn't really wanna use Go, because I didn't have any particular interest in Go. And I was like, "Well, I wanna learn Rust, and I want to have a codebase that I can maintain that's not Node.js anymore. So I'm just gonna rage rewrite it in Rust." And I got, like, 70% of the way through that, and I was like, okay, I have a feel for this language now, and it, you know, feels...I'm not great at it, but I at least can stumble my way through doing things.
And I have this code base that, as happens with many projects at around the 70% mark, I was sort of, like, okay, yeah, but do I really wanna do the rest of the work to get this over the finish line, and then maintain that codebase, and then new contributors are not gonna know what they're doing, and so on, so I ended up kind of putting it on the shelf and not finishing it. But somebody else separately went and did it. I definitely would agree with...the learning curve on Rust is a downside. It's quite high, and it's also not...like, some languages, I think, have a high learning curve because...like Haskell, for example. Haskell, I would say, has a high learning curve, in part because a lot of the things, you're encountering them for the first time. I've never heard of these concepts before, I don't know what they're about, and there's just kind of a lot of stuff to learn. In Rust, I would say the thing that's the hardest
about the learning curve...and people often talk about "fighting the borrow checker." So, the borrow checker is kind of Rust's, like, marquee feature. It's what sets it apart from other languages. It's what gives you the memory safety. But, at the same time, it's not so much, like, you can just sit down and, like, once you wrap your head around the borrow checker, you got it, and it clicks. It's more like there's just a whole lot of things that all fall under the umbrella of borrow checker, but there are various scenarios. And I remember one time, it took me, I'm embarrassed to say, like, I think it was, like, two months or something, where this part of the compiler was blocked, and I couldn't figure out how to do the thing I wanted to do. And, you know, the borrow checker gave me an error, and said, "You can't do
this." And I was like, why not? I know this is possible. If this were in, like, C or something, I would just be like, "Here, take this thing and put it over there. Put it on this thread." And it was like, "No, you can't do that." And I was like, "Well, why not? Why can't I do that?" And
I eventually realized, I was like, wait a minute, do I just need to use...it was IterMut versus Iter. And the difference is, Iter is, like, I wanna iterate through these things, and IterMut is, I wanna iterate with the possibility of mutating them. But it didn't occur to me to use IterMut because I didn't wanna mutate them, at all. But the problem was I needed to use IterMut to prove to the borrow checker that I had permission to mutate it, which meant that it was safe to put it on a thread. So, in this case, mutable was sort of a stand-in for,
"is uniquely owned by this particular instance." And I switched it to Iter and IterMut, and this thing that I had been stuck on for, like, two months, it was like, "Okay," right? And… I would love to have been in the room at the time, it was like, oh my God. But I bring this up as an analogy of even though I had that, I already knew the mental model of what mutable means "is uniquely owned, and therefore has permission to do certain things," it hadn't occurred to me that, I didn't, like, put two and two together with the implications of that, that, like, oh, if I want to put these things on threads, I need to IterMut, even though I'm not gonna mutate them. So, it's just a lot of stuff like that. It's almost like you're being more restrictive than you need to, in some senses, right? But because the mental model is, okay, this is a restrictive memory model, so I wanna be overly restrictive And I think, in this case, it was more of a language terminology thing, in the sense that, I think if instead of calling it IterMut, they called it, you know, IterUnique... I'm not saying that they should rename it. It's more just, like, if they called it that, I think I would have more quickly realized, like, "Oh, yeah. To hand these things out to the threads, they have to be unique
because the whole point is I don't want them to be shared across the threads." That's, like, another aspect of Rust that makes it tricky, is that part of what the borrow checker does has to do with when things are, like, available in memory, the lifetimes, like, when they're alive and when they can, you know, be reclaimed. It's also about mutation access, like, whether this thing can or cannot mutate that. And also multi-threading, like, which things have permission to mutate things, which has to do with, like, preventing data races, in addition to memory safety. So, there's just a lot of different things that all kind of come together, and when you put it all together, you get a big learning curve.
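To make the Iter versus IterMut point concrete, here is a small, hypothetical sketch in Rust, not Richard's actual compiler code: the element type below contains a Cell, which is Send but not Sync, so the shared references you get from iter() cannot be handed to other threads, while the unique references from iter_mut() can, even though the spawned closures never mutate anything.

```rust
use std::cell::Cell;
use std::thread;

// Hypothetical element type, just for illustration: `Cell<u64>` is Send but
// not Sync, so a shared `&Task` cannot cross a thread boundary, while a
// unique `&mut Task` can.
struct Task {
    hits: Cell<u64>,
}

fn main() {
    let mut tasks = vec![Task { hits: Cell::new(1) }, Task { hits: Cell::new(2) }];

    thread::scope(|s| {
        // With `tasks.iter()`, each item would be a `&Task`, which is not
        // `Send` here, and the compiler would reject the `spawn` below.
        for task in tasks.iter_mut() {
            // The `&mut Task` proves unique access, so it may be moved into
            // the thread -- even though the closure only ever reads from it.
            s.spawn(move || {
                println!("hits = {}", task.hits.get());
            });
        }
    });
}
```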
You've spent a lot of time building this compiler. But the aim of it, presumably, is to compile this new language. So, maybe you can talk a bit about Roc, and what makes it unique... Yes. ...and why? Why did you decide to write a new language? For Roc, the tagline is "Fast, friendly, functional." And I was just talking to Dave Thomas, and he mentioned
that he knows someone who made another language, I think it was, was it K maybe? Yes. The tagline was "Fast, fun, functional," which I did not know existed, but it's very, very close to what I independently came up with. But the basic idea is, I really wanted a language that felt like Elm in terms of the ergonomics and the overall user experience, but which, instead of being focused on browser-based UIs, which is sort of Elm's bread and butter, I wanted for sort of, not just, like, one other domain, but sort of, like, the long tail of domains. So, I'm not just thinking about, like, servers and command-line apps, although those are the two things that people are most interested in it for. Or desktop GUI applications, which I'm also interested in. But also things like... If you can replace Electron,
the world will be a happier place. Well, that's a very big challenge, right? It's not an easy thing. There's a reason Electron's so popular. But definitely, I've always run into these little cases, and Vim script is gonna be the one that comes first to mind. I wanna write a Vim plugin. I don't wanna learn Vim script. I don't wanna use Vim script.
I've heard, you know, it doesn't have a good reputation as a language. But what I wanna use is I wanna have, like, an Elm-like experience, this really pleasant experience I've had with Elm. But Elm, being a focused language, is not ever gonna get into that. There's never gonna be an Elm for
Vim script. So I wanted to make a language that was capable of being used in lots of different domains, while still feeling like it was, to some extent, domain-focused, like how Elm is. So, without getting too much into how we achieve that, there's this basic, high-level concept of platforms and applications. So, what we mean by that is, an application is basically just, like, you know, my project. I'm building a thing. A platform is something like a framework, in the sense that it's sort of the foundation that you build on.
You never have more than one platform. You always have one. But unlike most languages, in Roc, you have to pick a platform. There's no such thing as, like, a platformless Roc application, or, like a, you know, framework list, if you will. And the reason for that is that platforms, although they kind of feel like frameworks, they're scoped differently. So, a framework, typically, like, let's use Rails for example, Rails will be in charge of things like database access, and how do you do routing, and, like, request handling and stuff like that. In Roc,
sure, that would be true too, but also, it's gonna be in charge of all of your low-level IO primitives. So, it's gonna say, here are all the things you can do, in terms of HTTP and, you know, database access and this and that. And for a web server, maybe you have, like, the full range, but you probably don't have, like, reading from standard in on a web server. Does that make sense? Yes, it does. Maybe you leave that one out. Now, a better
example, though, is let's say that you wanna make a platform for, like, a database extension. When you're writing a Postgres extension, do you even wanna, like, have network access? Do you wanna have arbitrary file system access? Does that make sense? So, the way most languages do this is the standard library has all these really low-level IO primitives, and then there's certain use cases where it's like, eh, don't do that. Don't write to that. But a problem this creates in the ecosystem, for this, sort of, long tail of use cases, is that you use a library, and that library is like, "Oh, I can just, like, create a temp dir, and put stuff in there, right?" And it's like, I don't know if I want you doing that on my database server, you know? And so, the idea is that, by basically making it so that you have to pick a platform, and the platform says which primitives are available, the ecosystem will sort of naturally design itself to be accommodating to that, and to be aware of that, and to be like, "Oh, if I choose to, you know, use a temp dir or whatever, that's gonna restrict which platforms I can potentially run on." If I read from standard in, that's gonna restrict which platforms I can run on. Another thing is that the platforms, because they're in charge of the IO primitives, they can implement certain, like, sandboxing features. So, one example of something that I'd be...I hope someone builds in Roc, because they now can,
which I would love to use, is a sort of sandboxed script runner. So, for example, like, this is something that Deno has at the language level, but in Roc anyone can just implement it in user space, which is basically, like, you know, if I download a script from the internet, and I run it, I know it might mess up my machine. Like, it might give me a virus, it might write to places on my disk that I didn't want it to write to. But because in Roc you have this platform-application split, if I have a platform that's like, "I'm a command-line runner, but I'm a sandboxed command-line runner," and because I'm in charge of every single one of the IO primitives, I can say, "Yeah, look. I give you access to all the IO primitives, but guess what? If you try to write to this part of the file system, or you try to read from there, I'm gonna prompt the user, and there's nothing you can do about it." So it's now as safe as a web browser, in terms of, you know...
That's very interesting. But at the command line. And I would love to have that, because I run stuff that I download from the internet all the time, and I'm not always doing it in a VM, right? You heard it here first, folks. You shouldn't run stuff you download from... Yeah, well? And we all do, right? And I would love to have something where I just had this confidence that I don't need to audit the whole thing. I just need to look at what platform are you using? Okay, it's the sandboxed one. Great. Done. I think this is a really interesting idea, because, I mean, I've only sort of come across this maybe a couple of times before, but it seems to have...people aren't talking about
it much now, but five years ago, there were lots of people talking about unikernels, for a different reason. That was about security, and about the, you know, the attack surface area, essentially. Can we limit the amount of stuff we're gonna compile into our OS so that it's not available? You can't even use any of it. It's just not there. And I think I had a line at one point that Docker is 30% of the way to unikernels. It's like, you know what I mean? That was five years ago, and I still talk about it. But it seems like, in some ways, a similar idea, but coming at it from a different perspective.
It's definitely about, I mean, I would say the thing that you have in common there is the idea of security through just, like, absolutely not making things available in the first place, rather than having them be available and trying to make sure you played Whac-a-Mole and locked everything down, right? Just saying, like, it's not even there by default, and we are only gonna opt into giving you access to the minimal set of things necessary to do whatever you need to do. Cool. And what sort of language is it? Is it a purely functional language? You said it's functional? It's functional, and I would say, like Elm, there's a very heavy focus on usability and user friendliness and stuff like that. There's different sort of schools of thought of, like, functional programming languages. So,
I would say that, like, Haskell is very focused on, like, mathematics, or at least, like, it culturally feels that way. Maybe different people would disagree with that, but... And I would say, like, Clojure is a very, like, you know, it's all about Lisp, and, like, macros and, like, these particular set of primitives that are not necessarily required for functional programming, but, like, fit together in interesting ways with functional programming. And, like, Elm and Roc are very much, like, typed, purely functional, very focused on, like, having a small set of simple language primitives that work well together, and then nice compiler error messages and ergonomics and stuff like that. I would say we're, on the tooling side,
we're drawing a lot of inspiration from Go, where we're like, we have the test runner built in, we have the formatter built in. We wanna make it so, you know, you download the Roc binary, and then you can just go. You don't need to, you know, pick a bunch of things off the shelf, you know, to get things that... Everybody agrees you should have a testing system, but you don't need to go pick one off the shelf. It's like, it's there. It's right there, built in. And have you taken the same tooling approach as Go? I mean, have you taken the same decisions around things like testing with...or is it Rust? With Rust you test inline. You have the tests in the same file. You can do that, yeah. Yeah. So, we do have inline tests. You just, it's, the keyword is
called "expect," so you can just, like, write your function, right below it, next line, expect whatever, and then you're done. Ah, super cool. Actually, I guess a nice example of ergonomics. This is always something I'd liked. Power Assert is the one that comes to mind, that I use, and I also, back in the day, I did a little bit of development with Groovy, and they had that built into their tester, and I always thought it was cool, is that when you run your tests in Roc, you can just write normal booleans. Like, you don't need to do, like, assert this or that. You just say, like, you know, expect X equals, you know, for X equals five, and that's it. And what it'll do is, if that test fails, is it'll show,
first of all, it'll print out the source code of the actual test that you wrote, and then also, any named variables that you had, it'll just tell you what their values were. So you don't have to go back and be like, "Oh, wait. What was this and that?" Just trying to give you... And we've also talked about maybe expanding that a little bit to tell you, like, what's on either side of the equals, or if you had, like, a less-than, you know, show you those things, because you might wanna know. Just try to give you the info that you want anyway, and don't make you go back and, like, debug log the test. Yes, cool. That's the first thing you usually do anyway, right? So might as well save you the trouble.
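For comparison, the Rust-style inline tests mentioned a moment ago look roughly like this; a minimal sketch, with made-up function and test names:

```rust
// Tests live in the same source file as the code they exercise.
pub fn add_one(n: u64) -> u64 {
    n + 1
}

#[cfg(test)]
mod tests {
    use super::add_one;

    #[test]
    fn adds_one() {
        // Run with `cargo test`; this module is only compiled when testing.
        assert_eq!(add_one(4), 5);
    }
}
```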
I was always of the opinion... I don't write as much code as I used to anymore, it has to be said, but I was always of the opinion that if you use the debugger, you're failing somehow. But then, I come from a very, sort of, purist TDD kind of background, if you like, so…
Well, but regressions still happen. Yes, of course. Yes. So, is it out? Is Roc out now? I would say it's pre-release. So, we don't have a numbered version yet. You can download a nightly release. We're in the process of making a real website right now. Depending on when you watch this, maybe it'll be out. But right now, it, like, as of this exact moment, there's kind
of a placeholder website, that sort of describes the language, but it's very bare-bones. But now we've gotten to the point where it's useful for things. So, before this point, like, last year, I would say, like, "Well, you can try it out and play around with it, but it's not, you know, really that useful," but now it is useful. I would say it's useful, but very immature and early,
and there are bugs and stuff like that, but you can, like, build stuff with it for real now. And now that we're at that point, we're like, "Okay, now we need a real website, and," you know, so it's ready to be used by early adopters who aren't afraid to sort of roll up their sleeves with a new technology. But, like, I have a lot of fondness for my time at the beginning of Elm, because, on the one hand, when you have a small set of people using the technology, yes, there's sharp edges and bugs and stuff, and the ecosystem's not there yet. But on the other hand, you know, I used to work with Bill Venners, who made ScalaTest, and I remember thinking, "How could you have made something that's used by so many people?" and I asked him about that, and he's like, "Oh, that's very easy. Back then, there was no testing thing, so I made one." And that's how it is in the early stages of a language. Somebody's gotta be the first person to write whatever X is
for that particular, you know, use case. My career, my programming career, goes back to before Java, essentially, and that sort of completely changed my life, right? So, when Java came out, and the internet, essentially, well, the World Wide Web, that and Java really sort of changed pretty much the way, I think, many programmers went about their job. But the interesting thing with that, and especially in Thoughtworks, is everything was a first. You know, everything you were doing was a first, in a lot of ways. The testing frameworks were a first. The continuous integration servers were a first. The, you know, acceptance testing frameworks, like Selenium and these, they were a first. All
these sorts of things, the innovations that were happening, were because people were hitting these issues, and then kind of trying to come up with a way of solving a problem that they were experiencing on a day-to-day basis. I do sort of wonder now, are we still seeing that, or are these all sort of solved problems now? It's just, when we have a new thing, a new language, say, like Roc, we need to create the test runner for it, you know, and there's someone who's gonna be the first person to do that. There's someone who's gonna be the first person to do X, rather than it being... Or, another example would be things like machine learning, you know, applying engineering discipline to machine learning. So, you know, there was a period, not so long ago, where the idea that you might version control your model was, like, a crazy idea. Why would you think about doing... But that's now a kind of normal thing, so things are repeatable and so on. Is this a case of, sort of, we're applying, I guess,
a set of tested and known patterns to the new things? Is that a kind of… I'd say it's a mix. So, an example that comes to mind is, so, in Roc, we have a, as far as I know, unique, I don't know of any other language that does it this way, approach to serialization and deserialization. So, there are two different ways that this is, like, commonly done today... So, there's, like, the JavaScript way, the Ruby way, where you get some JSON in, and you just say, like, JSON.parse(), and it's like, cool. Now you have a JavaScript object. And of course, the downside of this is, you know, you get partway through your program... Cool, now you have a JavaScript object. What if the JSON doesn't match what you thought it was gonna match? You're gonna find out about that eventually, but it might be pretty distant from where that original problem happened. So, that's one way of doing things. Another way of doing things is, I'm thinking of Rust, but, I mean, I know in Java, you can do it the same way, where you have a schema up front, and you say... So, this would be, like, Jackson in Java. So, you say, "Here is exactly what I expect it to look like, and, you know, come and parse the JSON, and if it doesn't match that, fail right away, right there."
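That schema-up-front style looks roughly like this in Rust, using serde and serde_json; a minimal sketch, assuming both crates are added as dependencies (with serde's derive feature enabled), and with a made-up struct and fields:

```rust
use serde::Deserialize;

// Made-up schema, just for illustration: the expected shape is written out up front.
#[derive(Debug, Deserialize)]
struct User {
    name: String,
    age: u32,
}

fn main() {
    let json = r#"{ "name": "Ada", "age": 36 }"#;

    // If the JSON doesn't match the schema, this fails right here,
    // rather than somewhere far away later in the program.
    match serde_json::from_str::<User>(json) {
        Ok(user) => println!("parsed: {:?}", user),
        Err(err) => eprintln!("decode failed: {err}"),
    }
}
```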
"Here is exactly what I expect it to look like, and, you know, come and parse the JSON, and if it doesn't match that, fail right away, right there." So, that, in terms of, you know, how easy it is to debug later, I would say that's easier to debug later. But a downside of that is that you do need to actually write out the whole schema, and, you know, sort of keep it in sync with your program, and so forth. So, something we've introduced in Roc, that as far as I know is novel, is that we kind of have both. So, you can write at the same time... So, you write the equivalent of, like, JSON.parse(), and it does just, you don't have to write a schema, but what it does is it uses type inference to infer the type that you're parsing into, and based on how it's using the rest of the program. And so it actually will decode it right there at the call site, and if it doesn't
match how you're going to be using it throughout the rest of the program, it fails right away. That's super interesting. Yeah. Now, what's interesting about that is that that's not specific to JSON. It's something that's just, like, we call it, you know, "decoding" is the general term for it. So, in order to make it work for, let's say, JSON, somebody needs to write
a particular, like, JSON-aware parser, that works with this framework, so that it can, you know, translate between JSON and Roc values. So, on the one hand, you could look at that and say, "Well, this is just somebody needs to write a JSON parser for Roc." But on the other hand, structurally, it's different from how it's done in other languages. It's not like you're just translating it into a normal JavaScript object. Is there a TypeScript library called io-ts or something like that? I've heard of this, yeah. I believe that that works like it works in Java, and in Elm and Rust, where you do make a schema, and, you know, somehow you define, in code, like, you write some code that, you know, does this. I assume,
I don't know for sure, but I assume that you either write it by hand or you run some code that generates it or something like that. But as far as I know, in TypeScript, it's either you do that, or else you'd just say JSON.parse(), and, you know, that part's just not type-checked. Yes, right. But, yeah. But the point being, like, you know, if you're writing this, it's like, you're doing it in a different way than has been done before.
But on the other hand, it is still just, you know, for JSON, for XML, for CSV, whatever. It's good. We're talking about functional programming languages, and we've finally got to the point where something's a bit monad-like. Which is good, right? Because that is interesting,
right? That's what I found interesting about TypeScript, when you're parsing stuff over the wire, and you've got this lovely type safety within the environment you're working in, which is the front end. But, as you say, like, you could be sent garbage, and essentially you've got no way of knowing until you try and parse it, decode it, whatever. So, I kind of like the idea that actually there's maybe an attempt to solve some of those problems, where you're actually being type safe across the entire, I guess, back end, front end, etc. And across the wire. And one thing I did... I did a lot of integration, a lot of XML parsing in my day. And, you know,
we used to use XML. We used to... What was it called? XPath, that was the thing. Oh, yes. I remember that. Where, rather than do the, kind of, like, take the schema, basically have a client that's generated from the schema, and you kind of, you know, when you receive a message, you turn that into the object, and if it doesn't match the schema, you blow up. You'd say, instead of that, you'd use XPath to just pick out, and Schematron, actually, was the thing, you pick out just the bits from the message that you wanted, and therefore, you would know if...you were insulated from changes to the schema, if you like. So, you know, if someone changed the schema, you weren't just suddenly gonna blow up.
Because this is the main problem, right? I mean, how do you avoid that issue, of, essentially just falling over in a heap if the thing that turns up isn't what you were expecting? So, if it doesn't conform to...if you can't decode it, right? Do you just blow up, and just, like, sorry, we're done? Well, the default is, I mean, it's not, like, throwing an exception, it's just, like, you get back a value that says either it succeeded, and here's your answer, or it failed, and then here's, you know, the error that it failed with, such as, like, you know, this field is missing or something like that. So, recovery is sort of up to you as the application author. It's not, you know... I don't think there's a one-size-fits-all way to recover from data being missing. Which, it's the compile time versus runtime checking of these things, right? So, that's what we used to do. We used to do it at build time. So, we'd generate a library based off a schema, and then that library's gonna be quite fragile in the face of changes elsewhere, if you like, and you'd have to recompile your application if someone's schema changed somewhere, which, like, sucks.
Now, having said that, if you want to write something that is more flexible at runtime, like, you can say, well, it's okay, if this field is missing, I wanna default to this or that. You can do that, but then at that point, you need to, at least in Roc's case, you would need to sort of, I'm gonna use the term eject, you know, like, translate the automatic thing that's happening into, like, an actual, like, written-out schema, like a decoder that you can then customize. So, this is how we do it in Elm, is, like, it's always done that way, which makes it very easy to customize. Another nice thing about that is, if you have it all written out, that it means that if you wanna change your variable names or something like that, you can do that without worrying that you're accidentally causing a regression in the decoding, which, you know, hopefully, a test catches, but it might not. But then again, there's another trade-off there,
which is that when you have it all written out, it becomes a little bit more brittle to internal changes. Like, so if I need to, like, you know, add a field somewhere that happens to be in a data structure that's used quite often throughout this thing, I have to go through and change it in a bunch of different places. And so, certain things, like being synchronized, either can be a source of bugs or can be a source of convenience, and it's just an innate trade-off. But yeah, if you do sort of eject the decoder, and have it all written out, then you can be a lot more flexible in terms of, if the runtime value is this or that, or this is missing but that's not, or, you know, I can say, well, I'll accept any of these three names here, and I'll just internally convert them to the same thing. So, a lot more flexibility if you go that route. I feel like we've gone quite deep into some random part of the language, which is, like, parsing responses. But let's maybe chunk it up a bit. So, what are you excited about in terms of features?
For Roc? For Roc, yeah. Great question. I mean, that is, to be fair, one of the things I'm excited about. So, in general, like, it's 100% type inference, so you can, you know, you don't need to write any type annotations if you don't want to. I mentioned that, like, you know, it's fast, friendly, functional. So, in terms of fast, the thing that I'm excited about, there are two parts to that, one is really fast compile time. So, we've spent a lot of time doing that. We still have a number of projects to go, but one of the things that, I mean, you mentioned, like, TDD earlier, one of my hypotheses for why there's a really strong testing culture in Ruby, like, for example, and I think in Python also, more so than I've seen in, like, type-checked languages, I think part of the reason for that is that you get a really fast feedback loop when you have a dynamic language for two reasons. One
is that there's no compile step. So, we wanna just make our compiler so fast that you don't notice it. But the other part of that is that, from a workflows perspective, if I am writing a test in Ruby, or let's say I've got a bunch of tests, and I'm refactoring something, all my tests go red, because, you know, I've changed this thing. Okay, fine. Well, I can go and fix them one at a time. I can go, like, change my implementation, fix whatever, and then they go green one at a time. Now, in a type-checked language, the norm, today, is that I make my changes and I get a bunch of type errors, and all of my tests are not runnable anymore, until I fix every single one of the type errors. So, the whole, like, make the tests green one at a time by fixing implementation details,
that workflow is inaccessible until you've fixed every single one of the type errors. But quite often, I don't wanna do that. I wanna go through and, like, you know, change the behavior one piece at a time, and make sure that the new behavior actually passes all the tests. And then maybe there's still some leftover type errors because I changed the interface, but those are just a matter of going through and updating, you know, callers to do the new thing. In isolation, I still just wanna do this thing. Or, let's say I'm trying something out because I think the new implementation will have better performance, or I'm trying something out and I just wanna see how it feels to use it. Again, I don't wanna have to go and fix every single implication of that.
So, this gets me to another thing that I'm excited about, which is that we've designed the compiler, and it doesn't 100% work this way yet, but we've at least designed it so that, you know, we will get to a point where it does work this way, where the compiler always type checks your code, and always tells you about problems, but they don't block you. So you can still run it, even if it's got type errors or naming errors or whatever. So, the idea is that, much like a dynamic language, you still have those workflows available. So, I wanna get that same experience. This is always something I missed when going from dynamic to statically-typed languages, is that workflow of, like, I can always run my tests, no matter what's going on. And I can see which ones fail, and, you know, if they have a type mismatch, fine, that's a failure. Failed test. But only if that affects that test. If the type mismatch is in some distant part of the codebase, I don't wanna
see that. Don't block me. Just let me run these tests, and I'll come back to that later. So, that requires sort of building the whole compiler with that in mind. And when I say it'