Tech Radar Vol 28 — Western Preview
BRANDON COOK: Welcome, everyone, to the Technology Radar sneak peek webinar. I just want to welcome everybody here. I'm Brandon Cook.
I'm a principal engineer at Thoughtworks, and I'm accompanied by two folks from the Technology Advisory Board who put together the Technology Radar, Erik and Marisa. You guys want to introduce yourselves?

ERIK: Hi, I'm Erik. I'm a software engineer, and I'm also head of technology at Thoughtworks. I've been on the Technology Advisory Board for a while, and maybe a few years ago you might have even seen me present one of these sessions.

MARISA HOENIG: And hi, I'm Marisa.
I'm a software developer at Thoughtworks, and my current role is as a technical assistant to the CTO of Thoughtworks Global. This is my second Radar that I'm a part of, but in my role I also serve as the product owner for the Radar, so I do a lot of the behind-the-scenes stuff.

BRANDON COOK: And yeah, a big thanks to Marisa for stepping in. Camilla, who was originally supposed to be our speaker, couldn't join because she fell ill. So thanks again, Marisa, for stepping in.
Just to give a quick introduction to the Radar itself: the Tech Radar is a collection of tools, languages, frameworks, platforms, and techniques that we use and leverage across a lot of our clients across the globe. The Technology Advisory Board meets, discusses, and has strong opinions about these various tools, and creates the Radar to give us guidance around forward-thinking tools and techniques in the industry. I've seen the Radar used in different organizations to help with governance models as well as with tool and technique selection. And it can really be a magnificent resource for organizations in their technology decisions.
So I'll hand it over to Erik to kick us off on the first blip.

ERIK: OK. If we were not in a webinar, if I could see you, if we were at a conference, I would ask you to raise your hands if you like your password manager. And I guess most of you have one reason or another to not like your specific password manager.
But also, the whole model, I think we've seen, isn't really that user friendly. And I guess, depending on where you are, you've seen different workarounds. I personally have-- I don't know-- a number of things in the authenticator application with a six-digit multi-factor authentication code on my mobile phone. I have multiple password managers.
I actually have one of these-- it's not advertising I hope-- little security keys and all sorts of things. And it is not really a great state, I would say, we're in. And at the same time, I guess most of us, at least I hope the audience of this call, is aware that we shouldn't use the same username and password on all the websites, and we need something to become more secure.
And as I said, I've listed a number of methods that people have devised to make logging into websites and applications more secure. And why I'm excited about this one, Passkeys, which we're featuring in assess on the Radar, is that it really approaches this whole problem space in a new way, from multiple directions. In one way it is similar to multi-factor authentication, in that it actually has two different things. There is a secret-- and I'll explain in a second how that works-- but it also combines it with biometrics or, at a minimum, a PIN. At the same time, it also takes some of the benefits of other solutions, in that you can't do replay attacks, you can't do credential stuffing, you can't sniff the credentials.
Even if you would get some of the secret keys of a device, you would still have to have the second factor, and so on. Why I think this has a good chance of succeeding-- and I know that was the consensus of the group when we put the Radar together-- is that three large organizations-- Google, Apple, and Microsoft-- have gotten together to actually shepherd this standard. It is a separate standard by the FIDO Alliance, but the three major players in the industry that make many of the computing devices we use, and the operating systems on them, are actually behind it. So it brings us a lot of the benefits we wanted from the existing solutions today, but in a single standard-- and a standard that has the backing of the big players in the industry.
So then, how does it work? Basically, when you're creating an account, it creates a passkey. If you know cryptography, it is an asymmetric scheme. The website gets the public key, and you, on your device, retain the private key. Then, when you log in again, the website can present a challenge, and only you, holding the private key, can respond to the challenge, so the website knows it's you.
That's good. The multi-factor part comes in because the standard protects the private key on your device with biometrics or, at an absolute minimum, a PIN code. That, of course, means that even if an attacker gets one part, they still can't do anything. Support is now relatively widespread in the Apple ecosystem.
It's stored in the iCloud Keychain. And here's an interesting thing-- they are, of course, quite aware that if you lose these private keys, you have a big problem. You generally don't have a reset mechanism. So Apple, for example, forces you to use the cloud-based version. If you only have local keychains, you can't use it.
It must be backed up into iCloud. Google Password Manager is similar, so if you're on an Android device, it works the same way. On Windows, it uses Windows Hello, which, again, is the mechanism built into the operating system. In most cases you need a relatively new version: on macOS, you need the current version of macOS, and on iOS the current version of iOS.
On Windows, it's even worse: you need Windows 11. It does work in Chrome, but you need Windows 11 to store the keys. And that could seem like a huge turnoff, because a lot of people are on Windows 10 and it wouldn't work for them. But the good news is that the standard allows you to use multiple devices together. So you can be on a Windows 10 device and use, say, a browser like Firefox, or an older version of Edge, that definitely cannot store the secret. There's a system then where you can use your mobile phone: the website will display a QR code, and the passkey is stored on your mobile phone.
So that means even on Windows 10 it would work. Or take another case: I could create an account on Windows 10, store the key via the QR code on my mobile phone, and then the mobile phone syncs with the MacBook that I'm sitting at. I go to the same website, and I can log in on the MacBook, because by now the secret has arrived there. And still, on each device, I need biometrics to actually access the key, so I would have to use the fingerprint reader or something else on the Mac to actually log in. The standard, as I said, works reasonably well.
And it is embedded in all the big tech ecosystems. But at the same time, there are still some pitfalls, and we will see whether people get over them. For example, I mentioned that you can use a device that is nearby with the QR code, but as a measure to increase security, they also use Bluetooth Low Energy to make sure the devices are really close to each other. And that is tripping up a lot of people-- maybe Bluetooth is turned off, and then they think the system doesn't work, because nobody really tells them. Still, it is way more user friendly than many systems that came before it.
It does have wide industry backing. But the reason it is in assess is that we are not seeing widespread adoption yet, and there are still some pitfalls. And we have to see whether this standard got the balance right between being very secure, which we believe it is, and being very user friendly-- user friendly enough to actually achieve mass adoption. But from an implementation perspective, it is definitely usable.
And we know-- and this is also publicly known-- there are a number of large websites that are actually using it. So I would say, if you are in a B2C context where you have a lot of users, it might be an idea to run an experiment and offer this as an alternative. When the Radar is out, the blip text will include a link to one of the demo sites. Some of the demo sites are broken, but one of them works really well.
And even though you can't delete accounts on the site, you can use a bogus email address, so you can just play around with it and create an account. I would really encourage you, especially as a developer, to play around with this and see what the experience is like and whether it's the right experience for your user base.

MARISA HOENIG: Cool. I can cover the next blip, which is dependency pruning.
So this is a blip that we put in techniques, in adopt. And while it's listed as new on the Radar, it's not really a new technique in the industry. But we want to call attention to it because it is an important practice for keeping your security up to date. So what do we mean by dependency pruning? It is the practice of periodically taking a look at your dependencies and pruning, or getting rid of, any that aren't used.
This is particularly noteworthy when you're dealing with starter kits or templates, which bring in a lot of dependencies that you may not need. For me, I'm thinking of a frontend application where I'm using Create React App. You use that, and then you have a ton of dependencies that maybe aren't relevant to your project.
So it's important to periodically go back and delete any that aren't being used. That's because it really helps reduce build and deploy times, and it also removes vulnerabilities, so there's less chance of being attacked by anyone who might exploit them.
In terms of how you actually do this, it really depends on your tech stack. Each language has different tools available to figure out which dependencies are extraneous or unused. So if you're on an npm project, you can use npm prune, which goes through and takes out any installed packages that aren't listed in your dependencies.
Or if you're on a Python project using pip, you can run pip check to spot broken or conflicting dependencies, and then manually go through and remove the ones you no longer need. So again, this isn't really a new technique, but we wanted to call it out because we do think it is so important. You should periodically go back and remove dependencies that aren't being used. Yeah.
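The core idea behind tools like npm prune or depcheck can be sketched in a few lines: compare what package.json declares against what the source code actually imports. This is a hypothetical illustration-- the function name and the regex are simplifications, not what any real tool does internally:

```javascript
// Sketch: find dependencies that are declared but never imported.
function findUnusedDependencies(declaredDeps, sourceFiles) {
  // Collect every package name referenced via require() or an import-from,
  // ignoring relative paths (those start with '.' or '/').
  const used = new Set();
  const pattern = /(?:require\(\s*['"]([^'"./][^'"]*)['"]\s*\)|from\s+['"]([^'"./][^'"]*)['"])/g;
  for (const src of sourceFiles) {
    for (const match of src.matchAll(pattern)) {
      used.add(match[1] || match[2]);
    }
  }
  // Anything declared but never imported is a pruning candidate.
  return Object.keys(declaredDeps).filter((dep) => !used.has(dep));
}

// Example: react is imported, left-pad is not.
const candidates = findUnusedDependencies(
  { react: '^18.0.0', 'left-pad': '^1.3.0' },
  ["import React from 'react';", 'const x = 1;']
);
console.log(candidates); // ['left-pad']
```

Real tools handle dynamic imports, build scripts, and transitive usage, which is why a dedicated tool per ecosystem beats hand-rolled scanning-- but the principle is the same.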
Another one that isn't really a new technique, but that we felt was worth calling out, is to build demo frontends for API-only products. And where do I start with this one? API is a term that is used everywhere, in all sorts of contexts, and as the group who writes the Radar, we have stumbled over this on a number of occasions. Oftentimes it is overused and misused. A general HTTP endpoint that returns JSON is often called an API. All sorts of things are called APIs when they're not really, in our understanding, APIs-- that is, something that is meant to be created by one team and then used by at least one, but usually a number of, other downstream teams.
So take the typical case we often see when we write applications today: the application comes in halves, right? There's something running on the server and something running in the web browser, and they communicate via HTTP and JSON, or maybe a binary format. That is not what we would call an API. That is-- I don't know-- actually an implementation detail.
And there have been a number of entries on the Tech Radar over the years that describe this, and describe that maybe even different design patterns are worth considering in such a scenario. But for the case where a team is really designing-- usually, today, an HTTP endpoint that serves JSON or some other format-- something to be consumed by other people, for that understanding of an API, what we have found is that it's really difficult in a business context to get the business stakeholders to understand what that API is. As developers, it's easy for us to use a tool.
Say Postman, for example, or some browser plugin, or curl on the command line, to actually see the API and convince ourselves that it's working, or to familiarize ourselves with the API when we are on the consuming side. What we find, though, is that for business people it's really quite difficult to understand this. When you build a small demo frontend, you can often show them-- especially when the API mainly serves data or triggers an action on the server-- a form that gives them an understanding of what the API actually does. We've encountered this quite a bit as we implement the data mesh pattern in organizations, where you have these data products, and the data products are sometimes made available via an API. For the people who are not software developers, a demo frontend makes it more tangible and often allows people to make better decisions. And this is especially true because there are some-- I don't want to call it accepted wisdom-- certain slogans that have become popular over the last 10 years or so, like the Amazon one that everything must be an API, and so on.
And then business people start parroting that, and they keep saying it. But if you show them something that really shouldn't be an API, and you give them even a bit of a demo frontend for it, they'll see the excess data and say, that is not really something we want to show anybody else; it's not going to be reused. And that can often help you tease out what is really part of an API-- and the API should be considered a product, with a roadmap, with an understanding of your users, and so on-- and what is just an implementation detail. One more benefit we've found in building these demo frontends is that it gives the people who create the API a bit more empathy for those using it. This is particularly true when you're designing a public API that is going to be used by a large number of consumers, like hundreds of different teams.
When you're building an API that is meant to be used by a number of different parties, it can often be the case that you focus on what is easy for you to implement, or on speculation about how people will use the API. But having to go through the pain of actually building a frontend for your own API often gives you that empathy, and it lets you discover rough edges, or points that don't work well together, at an earlier stage. So that is another benefit of building small demo frontends for API products.
But as I said, API products in the strict sense: something that is being used by several teams.

MARISA HOENIG: Cool. We're going to do another technique. I promise we'll go into other quadrants in a minute. But this one is tracking health over debt, and it is a new blip in assess. We put it in assess because we really think this is a cool way to reframe tech debt.
And I'm not sure we've used it extensively on projects yet, but we want to share it with folks. So, tech debt-- I'm sure many folks are familiar, but tech debt can be defined as deficiencies in internal quality that make it harder to add new features or modify existing ones. When you're trying to build something new, you need to pay back that debt on the previous code, because you need to be able to build the new thing, but something is holding you back. So tech debt comes up a lot on teams, and you're often trying to balance when to prioritize it, when to tackle it, how to do that-- all those kinds of things.
And there's an article by the folks at REA Group in Australia where they talk about reframing tech debt as looking at your system health instead. We thought this not only reframes it in a positive manner, but it also makes you look more strategically at your system and at the different pillars of your software. In this article, which we can share in the chat in a minute, they talk about what good software looks like, and they frame it in terms of development, operations, and architecture. Within each of those categories, they have a dozen or so requirements, and they look at whether the system meets them.
Some of those are things like: is the development feedback loop short, or are components loosely coupled-- looking at different parts of your software and monitoring whether each is in good health, which in turn reduces tech debt. This reframing is really important because it lets the team focus on the value of reducing debt and prioritizing it, looking at the overall system and how it's impacted. I was trying to think about how you'd actually do this-- how do you reframe your tech debt into system health? I'm sure there are tools out there. The first thing that comes to mind for me is determining which metrics are really important for your team, your tech stack, or maybe your organization, and then creating a checklist of the things you want to track. I think that article from REA is a good starting point to ask: is this really important to us? Is this something that matters on our project?
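The checklist idea can be sketched very simply: name each health attribute, attach a way to measure it, and report which ones need attention. The pillars here follow the development/operations/architecture framing mentioned above, but the specific metrics and thresholds are made up for illustration:

```javascript
// Hypothetical sketch of a system-health checklist. Each check belongs to
// a pillar and has a predicate over metrics your tooling already collects.
const healthChecks = [
  {
    pillar: 'development',
    name: 'Short feedback loop',
    healthy: (m) => m.buildMinutes <= 10,
  },
  {
    pillar: 'architecture',
    name: 'Components loosely coupled',
    healthy: (m) => m.crossModuleDeps <= 5,
  },
  {
    pillar: 'operations',
    name: 'Alerts are actionable',
    healthy: (m) => m.noisyAlertsPerWeek === 0,
  },
];

// Evaluate every check against the current metrics snapshot.
function assessHealth(metrics) {
  return healthChecks.map((check) => ({
    pillar: check.pillar,
    name: check.name,
    healthy: check.healthy(metrics),
  }));
}

// Example run with metrics gathered from your build and monitoring tools.
const report = assessHealth({
  buildMinutes: 25,
  crossModuleDeps: 3,
  noisyAlertsPerWeek: 2,
});
console.log(report.filter((r) => !r.healthy).map((r) => r.name));
// ['Short feedback loop', 'Alerts are actionable']
```

The point of the shape is that the output names healthy attributes to restore, rather than a backlog of debt items-- which is exactly the reframing the blip describes.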
And then, once you have your checklist, figure out how you can measure it with existing tools, or what new tools you need to add to your system to track those things appropriately.

ERIK: So right now we're going to talk about more technology-- about tools for frontend development. And I've been writing software for quite a while.
We constantly see new sets of tools appear, and many of them we discard for the Radar because they don't add anything new. But there are tools that do the same thing as an existing tool but significantly better-- because there's a better understanding of the problem space, because people take different conceptual approaches-- and I'll come back to that in a second-- or because the problem space has actually changed and requires new tooling.
One that has been on the Radar is number 63, Vite, which is a bundler. I just checked because I couldn't remember: it was in October 2021 that we put it on the Radar, in assess, because we already saw the promise at the time. In this Radar, we will actually move it into the trial ring, because we have now used it in production in a number of cases, and the promises that came with Vite have really materialized for us. One thing I've been saying in different contexts, but I want to repeat it again: almost every tool today comes with a description on its website that says blazingly fast, and that is oftentimes just a subjective statement.
Vite, though, treats dependencies differently from the code you write. What it really excels at in its conceptual approach is that, because it uses ES modules, it allows the browser to reload only the things that actually need to change. In the older bundler approaches, everything-- all the dependencies and all your code-- had to be rebundled each time you made a change to your source code, and that, of course, takes time. With Vite, that is much faster.
It actually builds on another tool called esbuild-- that's the one I've talked about before that does the heavy lifting and is written in Go. It is faster that way, but it is also faster because it can treat the dependencies differently, so the browser doesn't have to reload them. In the end, what you get is what is known-- if you're not familiar with the term-- as hot module replacement.
That means, as a developer, you can have one window open in which you change your source code. You basically leave the window, it autosaves, and the web browser updates automatically within a fraction of a second. And that is exactly the developer experience we want.
This is the same idea that we want with fast feedback in testing, development, and so on. And I will tell you, as a software developer myself, 15 seconds is probably enough to impact your flow a little bit. When you have to wait for 15 seconds for something to change, your mind wanders off already. If you have larger breaks, like 30 seconds or a minute, it can become disastrous.
So I would say-- sorry, not every minute-- every second you can shave off this process significantly improves the developer experience. And that's what Vite does. There are a couple of other nice things that Vite does that I guess you will notice if you start using it.
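Getting started is also lightweight. As a sketch, a minimal vite.config.js might look like this-- the React plugin line is just one example, since plain JS/TS projects need no plugin at all, and the port is illustrative:

```javascript
// vite.config.js — minimal sketch, assuming a React project.
// `@vitejs/plugin-react` is optional; Vite works for plain JS/TS too.
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  server: {
    port: 3000, // dev server with hot module replacement out of the box
  },
});
```

Running `vite` then starts the dev server with hot module replacement enabled by default; `vite build` produces the production bundle.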
From us, it definitely becomes a recommendation as a tool to use. A few years ago, there were still some concerns about Vite not providing polyfills for some older systems, but today it is, in general, something you can use for most applications, I would say. And then, what is new-- or will be new-- on the Radar that comes out now is Vitest.
Because Vite and Jest don't play very well together. What we encountered is that people wanted Vite to get that fast hot module replacement, that fast reload experience. But then, every now and then, you want to run your tests, and the test pipeline in Vite didn't work with Jest. So teams-- and I saw this-- had to maintain more or less two different pipelines: one for the fast developer experience and a separate pipeline for running the tests with Jest. And there were some tricks, and shortcuts, and so on.
But in the end, it was arduous and tedious, because you need a different configuration; if you made a change, you had to make it in both, and so on. It wasn't really a nice state of affairs. So we were really happy to see-- as you can also see on the slide-- that Vitest will appear immediately in trial. We've used it, and we really like it in conjunction with Vite. The team behind Vitest also did something else.
Because they're aware of Jest, obviously, they're providing Jest-compatible APIs. So in theory, if you're not using Vite-- if you're using Jest with some other bundler or pipeline-- you could also use Vitest instead of Jest. It is a drop-in replacement.
In our experience, though, Vitest really shines when used together with Vite. There are all those claims about Vitest being blazingly fast, but Jest also says it's blazingly fast, and so on. So we haven't seen that much value in using Vitest as a drop-in replacement for Jest, even though you can do it. And in our experience, the teams that tried it said it wasn't necessarily faster than Jest.
But that's not the point. The real point is: if you were skeptical about switching to Vite because you knew about the problems integrating Vite with Jest, there's now one less reason to stay away, because Vite now has Vitest, and together they solve the problem. So you get all the goodness-- the developer-experience benefits of Vite-- but you don't have to worry about integrating with your testing framework anymore.
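The single-pipeline point can be seen in the configuration: Vitest is configured inside the same file as Vite, so there is no second setup to keep in sync. A sketch, with illustrative option values:

```javascript
// vite.config.js — sketch of Vite and Vitest sharing one configuration.
// The `test` block configures Vitest; everything else is the normal Vite
// config, so there is no separate test pipeline to maintain.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    environment: 'jsdom', // browser-like environment for frontend tests
    globals: true,        // expose describe/it/expect globally, like Jest
  },
});
```

With `globals: true`, the familiar Jest-style `describe`/`it`/`expect` calls are available, which is what makes Vitest close to a drop-in replacement for existing Jest suites.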
MARISA HOENIG: So, GitHub Actions-- I'm super excited about this blip. We're putting GitHub Actions in adopt this time around. And I'm really excited because a couple of years ago I actually proposed it for adopt, and that got shot down because it was way too early. But we featured GitHub Actions on the Radar three times before this one.
So back in April of 2021 it was in assess. It moved to trial in October of 2021. It was on trial for a while, and now it's in adopt.
So I know I've personally used Actions several times, and we have many teams who use GitHub Actions. Especially when you're in a greenfield environment, it's really easy to quickly set up a CI/CD pipeline. And I always like being able to see the pipeline, and the code for the pipeline, right alongside all my other code in my GitHub repository. You also have the capability to run the pipeline locally if you need to, or if you want to do testing-- there are open-source tools like act (that's A-C-T) that you can use to run it locally. And like other CI/CD tools, you can now take on more complex workflows and also call other actions with composite actions.
So I think, over the years, we've seen it slowly develop and gain some of the features we're used to from other CI/CD tools. It's really making its way in the industry. And a theme for some of my next few blips is urging some caution around security.
So one thing to pay attention to with GitHub Actions is that there are many third-party GitHub Actions available in the marketplace that you can use in your pipeline. But when you use them, you really need to be cautious about what they are: are they safe, are they from reputable organizations, or used by a lot of different people? There can be security vulnerabilities or issues there. GitHub also has advice on how to handle this-- advice on security hardening, and on how to build your pipelines in a more secure way.
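To make that concrete, here is a sketch of a minimal workflow-- the job and package steps are illustrative. One piece of GitHub's security-hardening advice is to pin third-party marketplace actions to a full commit SHA rather than a mutable tag, so the code you run can't silently change:

```yaml
# .github/workflows/ci.yml — minimal sketch; step details are illustrative.
name: CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test
      # For third-party actions, pin to the full commit SHA of an audited
      # version instead of a tag (placeholder shown, not a real action):
      # - uses: some-org/some-action@<full-commit-sha>
```

GitHub-owned actions like checkout are commonly referenced by version tag; the SHA-pinning advice matters most for third-party actions you haven't audited.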
So just make sure to pay attention to that when you're using GitHub Actions-- which, I think, is good advice for any CI/CD tool. With that, I also want to share a related blip called CI/CD infrastructure as a service. This is a technique we're putting in adopt as a new blip-- which, by the way, is kind of crazy, because new blips usually don't go straight into adopt.
I realize I actually talked about a couple that did that, so sometimes it happens, but usually it's pretty rare. That means we feel pretty strongly about it. These days, there are so many options for CI/CD infrastructure as a service that it's becoming really rare for it to make sense to manage your entire CI infrastructure yourself. Instead, you can use managed services like GitHub Actions, Azure DevOps, GitLab CI/CD, and I'm sure there are several other tools out there.
This technique is good because it means you don't need to spend as much time, effort, or hardware cost on maintaining and operating your infrastructure, and you can self-service provision your own agents instead of manually provisioning them, as you would if you were building up your own infrastructure. As I mentioned, again, lots of security cautions here. It's important to always be mindful of security when you're using these tools, because you aren't always getting security out of the box, and you're not just using your organization's own infrastructure or infrastructure that you provision. So just be careful, and make sure you're reading up on how to make it secure. And Erik, I think you also had a note on another related blip here.

ERIK: Yeah.
It's not on the slide, and we'll find out what it's going to be called on the Tech Radar, but some of these tools often have really fanciful, really short names. Our colleague [INAUDIBLE] said the tools quadrant of the Tech Radar looks like a list of indie bands because of all these names. But some of the larger organizations, which are now also happily embracing open source software, stick with more corporate naming.
And there's one called the Philips Labs Terraform AWS GitHub Runner. It doesn't have a shorter name-- that's the name of the GitHub repository. Philips being the large, I think originally Dutch, company, right? So, Philips Labs Terraform AWS GitHub Runner-- and this is exactly what Marisa mentioned-- it's a tool for when you have a hosted CI/CD service but want to run specific runners in your own infrastructure. It helps you use Terraform to set up these runners in AWS.
It uses Lambda functions and so on. So there's more coming on the Tech Radar that will provide insights to help you make the transition away from having your own CI server running on your own infrastructure. But I wanted to talk about something else-- something completely different yet again: number 96, Ferrocene.
I've personally been following Rust very closely, and we have looked at Rust on the Tech Radar as a programming language. You could play a little game of bingo on each Radar: when you see a new tool pop up, is it implemented in Rust? So Rust is, of course, a programming language, and it's a language that, I predict, we won't see very much for writing microservices, for example. However, Rust does have a number of niches, and one of them is writing tools that are really fast. We've seen lots and lots of tools featured on the Tech Radar in the past years that are these days written in Rust rather than C, and it's really proving its worth there.
One thing we see as Thoughtworks-- and I'm based in Germany, and Germany, of course, has a sizeable automotive industry-- is that the software in cars is getting increasingly complicated. We're talking about tens of millions of lines of code, usually written in C, C++, or sometimes in languages like MATLAB, which are then transpiled into C. And these are systems that are safety critical.
In our opinion and experience, Rust would be an excellent choice here, because many of the problems we see from a security perspective in safety-critical systems exist because of design choices, or accidental choices, that C as a programming language made. To give you an example: at the highest level-- there's an ISO standard for this, ISO 26262 by the way-- at the D level, you're not allowed to pass by reference. You're not allowed to pass a pointer to a function, because there are so many associated problems. Say you pass a pointer to a function, but then the original memory location gets deallocated: now you have a dangling pointer, which is catastrophic in safety-critical systems. Rust solves many of those problems out of the box. It also doesn't have a garbage collector, which is often cited as a big problem.
I would argue it's sometimes overstated how big a problem a garbage collector really is, but Rust doesn't have a garbage collector, [INAUDIBLE] definitely [INAUDIBLE] also a plus. But one thing that has really hindered the adoption of Rust in safety-critical contexts is that it has no certified toolchain. That means for some organizations, it's simply not a question of whether it's technically possible-- they're not legally allowed to use Rust in such a context.
And after many years of work behind the scenes, there are a number of organizations now working on Ferrocene-- Thoughtworks is somewhat involved. The idea of Ferrocene is to get the Rust compiler toolchain ISO certified so you can actually use it in a safety-critical environment.
What has prompted us to list it on the Radar now is a relatively recent announcement-- actually, two announcements. One is that a working group of one of the big consortia in the automotive industry is now officially looking at Rust as a programming language. The other is that a company called AdaCore is getting involved-- Ada being a programming language.
I don't know how many of you know it; it is also one of those C replacement languages. It is very popular in the aeronautical and aerospace industries, not so much in automotive. But a company that has certified Ada compilers is now also getting on board, and they will help.
And the assumption is that by the end of the year, a version of the Rust compiler, most likely 1.68, will be certified for safety-critical use. Which would then mean we can reap all the benefits of a more modern programming language like Rust in environments that actually benefit from it. I'm excited about this, as you can tell.
Yeah. I love the excitement. The last blip I'm going to go through is ChatGPT. Which, believe it or not, almost didn't make it onto the Radar in our discussions. It was a last-minute "we should probably list ChatGPT, and here's why." But we got tons of proposals related to ChatGPT, whether it's other blips that utilize it or ChatGPT itself.
So we're listing this in assess this time because we don't have enough production experience to really move it to trial. It's also super new. I know there's been a ton of hype about ChatGPT, but believe it or not, it's been around for less than five months, which I find astounding.
But ChatGPT can really be used for many different parts of the software creation process, and it can be used for other things by other roles beyond software developers. I won't get into that as much, but there are tons of different ways you can use it. What is it? In case you don't know, it's a large language model that has processed billions of web pages and can provide any number of answers and perspectives based on what you ask it.
I've had colleagues ask it for things related to code and things not related to code-- related to planning a trip or whatever. So there are use cases outside of software development too. Here at Thoughtworks, we like to say that it has knowledge but not wisdom. It knows tons of different things, but it isn't necessarily always correct, and it doesn't really have the wisdom a software developer brings to the table when you're coding something or working on a project. So we do think it's best used as an input to a process-- like generating boilerplate for a coding task-- rather than its output being seen as fully baked results, where you just copy and paste things from ChatGPT right into your repo and then let's go, let's deploy, all of that. Instead, sure, maybe you copy and paste it at first, but then you figure out how it works, finagle some things, change things up, and really make it work for your use case.
It's how I use Stack Overflow sometimes: you paste some stuff in, try it out, change things up-- things like that. It's worth noting that GPT-4 can now integrate with external tools, such as knowledge management repositories, sandboxed coding environments, or even web search. So there's a lot of potential. And those are just three of probably many integrations that already exist. And within the next few months-- this is still so new-- I'm sure there'll be tons more integrations you can do.
And, of course, there are some concerns around ChatGPT; I have to mention that. In terms of intellectual property, you need to be really careful about what you're putting into the model, because you don't want to share your company's secrets or anything else that should not be shared. And also data privacy, which goes hand in hand with that. So we really recommend that if you're using ChatGPT, or thinking of using it on a project or within your company, you consult your legal teams and make sure that's OK.
Make sure they're aware that you're using it and that they know the different concerns and issues involved. Like I mentioned, we have many adjacent ChatGPT blips. I actually thought we didn't have that many, and then I looked through and we have at least five, if not more. So when we do release the Radar, my task for everyone is to go through and do an Easter egg hunt for the other blips related to ChatGPT. But we have things like prompt engineering, domain-specific LLMs, self-hosted LLMs, and nanoGPT.
And probably my favorite: AI-aided test-first development. If this sounds unfamiliar, it's because I think we made up the name. But this is a new technique that we're going to talk about on the Radar. And if you want a little preview of what's involved, Martin Fowler, who's one of the members who helps build the Radar-- many of you probably know who he is-- recently wrote an article on LLM prompting, which is really this AI-aided test-first development. And it came out of a conversation with our head of tech in China, Xu Hao.
And he goes through how to use ChatGPT to produce useful, self-tested code and use that as an input into your system: priming ChatGPT with an implementation strategy, asking the AI for a plan, and then using that to generate code, working hand in hand with ChatGPT. So I know Gareth just posted the article in the chat.
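As a rough sketch of that test-first flow-- the function and tests below are hypothetical illustrations, not taken from Xu Hao's article-- the developer writes or reviews the tests up front, and a generated implementation is only accepted once it passes them:

```rust
// Hypothetical illustration of AI-aided test-first development: the tests
// in main() are written (or reviewed) by the developer first; the
// implementation -- which could come from an LLM -- must pass them.

fn median(values: &mut [f64]) -> Option<f64> {
    if values.is_empty() {
        return None;
    }
    // Sort in place, then pick the middle element (or average the two
    // middle elements for an even-length slice).
    values.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let mid = values.len() / 2;
    if values.len() % 2 == 0 {
        Some((values[mid - 1] + values[mid]) / 2.0)
    } else {
        Some(values[mid])
    }
}

fn main() {
    // Tests come first; generated code is adapted until they pass.
    assert_eq!(median(&mut [3.0, 1.0, 2.0]), Some(2.0));
    assert_eq!(median(&mut [1.0, 2.0, 3.0, 4.0]), Some(2.5));
    assert_eq!(median(&mut []), None);
    println!("all checks passed");
}
```

The point of the technique is exactly this inversion: the human keeps the wisdom (what correct looks like), while the model supplies the knowledge (a candidate implementation).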
Check that out if you haven't already, and look forward to a bunch of ChatGPT-related and generative-AI-related blips on the Radar. And maybe, Marisa, to add one thing-- you mentioned already that the name of the blip is a bit contorted. But one thing we roundly rejected was a proposal to put "ChatGPT pairing" or "pairing with ChatGPT" on the Radar. Because, as you said, Marisa, this is not about wisdom.
It's about knowledge. And we assume that when we're pair programming-- we do this almost exclusively at Thoughtworks, pair programming-- we expect the pair to have some wisdom and to really creatively talk to each other. And we felt that is something ChatGPT definitely cannot do at this stage. It's a useful tool, we felt, but we did not want to call this pairing, as pair programming, for us, is still something quite different.
And we are certain that what we're seeing today in these LLMs is not going to replace the second person in a pair when you practice pair programming. Now, we don't keep a list of things that we don't put on the Radar, but I did want to highlight this one. Because it came up in the discussion, it was proposed by our colleagues, and I've seen it a lot on the web these days-- people talking about AI pairing or something like this. But we did discuss it, and at least from our perspective, it is not pair programming that you're doing with an LLM. You're doing something else, which can be useful-- I've used it too-- but it is not pair programming.
Thanks, Erik and Marisa-- very thorough in terms of explaining the blips. We do have some questions from folks here. I think this first one may be directed at you, Erik, around demo frontends for APIs. It's asking about demo frontends and how they could be coupled with low/no-code UIs. Curious if you have any opinions on that.
It's not really going to win-- how do they say-- any beauty contests. You don't have to look for performance and so on. It's just whatever makes you productive in getting this frontend out.
And then, I guess, another extension on the demo frontends for APIs: to what extent are workflows illustrated versus just request/response? What has been useful on projects? Now, that's a very good question. APIs that have complicated workflows probably don't lend themselves so well to this approach. Most of the time when we see APIs-- it depends also on the overall enterprise architecture, on the architectural style-- ideally, APIs should be relatively self-contained. And I'm saying ideally; I know this is not always possible. Ideally, the API shouldn't build up too much state between the client, the consumer, and the producer, which is this conversational request-response pattern that we're seeing. So for those APIs that are more standalone-- where you can make one call, maybe one setup call, a little bit of state, another call, and then get some data back-- that works really well. If you have a system that really heavily relies on shared state between two players and there's an API involved, I would argue don't do what we describe there.
Don't build a demo frontend. But I would also say maybe there's a bit of a smell there-- a smell of a system that ends up being what is often known as a distributed monolith: a system where two halves or three parts are too intertwined with each other, which is why they need this request-response interaction and that state on both sides of the system. But yeah, the short answer is: if you have complex request-response patterns, that technique is probably not the one to use. If you have APIs that are more "I give you some information, I get something back," or "I trigger an action and get a status back," then the technique works really well.
Switching gears into CI infrastructure as a service-- over to you, Marisa, I think. What's your opinion with respect to other tools like Jenkins, or similar tools for CI? Yeah, I can cover a bit of this, but I'd also love for Erik to chime in. Most of my experience has been on greenfield projects where, in most cases, I've used GitHub Actions, though I've had some experience with Jenkins and other tools. One thing to note, though, is that on this Radar we're putting GitHub Actions in adopt.
That doesn't mean that other CI tools aren't in adopt or trial, or that we no longer recommend them. Often, it really depends on your use case and what makes sense for your project. Or sometimes you have an enterprise account with Jenkins, or you have something set up already with GitHub Actions-- I think it really depends. But yeah, from my experience I've mostly worked with GitHub Actions, so I'd love to hear, Erik, what your thoughts are on some of the other tools. I was just typing an answer to another question. Yeah.
I mean, we've seen lots of self-hosted systems. But as we're trying to express with this blip, it feels like the time has come to say, in most cases, goodbye to them and actually go with something that is hosted as a service. That means you can still do it, but really think: why are you doing it? Marisa talked a good bit about security-- are you really going to be so much more secure installing Jenkins yourself in a cloud environment than actually using GitHub Actions? I know this is a bit controversial, but really think about what you're actually getting from running this yourself. And that's also why I wanted to highlight that Phillips example-- there are certain things that you need to run yourself. As I talked about with the automotive industry, we often talk about hardware-in-the-loop testing, which you definitely can't do in the cloud.
So you can sometimes host runners yourself. But our sense is really that the industry is moving, for the right reasons, to CI systems as a service, maintained by somebody else. And there's less and less reason to host them yourself.
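To make that hybrid model concrete-- this is a hypothetical sketch with made-up job names and make targets-- a hosted GitHub Actions workflow can delegate a single job, such as hardware-in-the-loop testing, to a runner you host yourself:

```yaml
# Hypothetical sketch: hosted CI as a service, with one self-hosted job.
name: build-and-hil-test
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest          # runs on GitHub's hosted infrastructure
    steps:
      - uses: actions/checkout@v3
      - run: make build
  hil-test:
    needs: build
    runs-on: [self-hosted, hil]     # runner you register next to the hardware
    steps:
      - uses: actions/checkout@v3
      - run: make hil-test
```

The `runs-on: [self-hosted, hil]` line routes only that job to a registered self-hosted runner carrying the `hil` label; everything else stays on the hosted service.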
Which means, yeah, maybe saying goodbye to Jenkins after all those years. And I think this one is probably back to you, Erik, around Rust. There are a lot of questions about Golang versus Rust-- they seem like hot languages, but it doesn't seem like there's a lot of demand in the job market. How do you see the whole landscape of Go versus Rust and these systems-engineering-focused languages? Yeah. I'm uncomfortable with the framing of Go versus Rust.
I think they came along at a similar time, and you could even put Swift in there as well-- I've done it sometimes just to be provocative. I mean, companies were really unhappy with C and C++. Apple had Objective-C and decided they wanted to do Swift for their purposes. Google had a lot of C on the servers.
They came out with Go. Mozilla, of course, was the company-- or rather the organization-- that came out with Rust. I think there was this drive to get away from C as a base language. But they have developed in different directions.
I mean, we listed Swift on the Radar several years ago, but there's very little enthusiasm at the moment for using Swift for anything but iOS and macOS development. You could theoretically write microservices in Swift-- there's a compiler, you can compile it on Linux, and so on.
It just doesn't really happen much. And similarly, I think we've seen Go being used more for microservices, more for tooling in the cloud-native architecture landscape. Whereas Rust is really taking the spot where C used to shine-- command-line tools, that kind of tooling, and larger things. I think what's also interesting to see is that Go for a long time didn't have generics.
And the idea was: we don't need generics; they just make the language too complicated. We want to keep Go very simple and not introduce complicated language constructs. Which I think was a valid design approach for the programming language, but it made it harder to write large applications, because you were lacking many of the abstraction mechanisms that arguably more powerful languages had. But that's OK if you're writing a small tool. If you're writing small microservices, you don't need a system that can have 100,000, 500,000, or a million lines of code.
So what I'm thinking is that with Rust, where you have more powerful language abstractions, you can tackle larger projects-- where there's a need for large code bases, and code bases that need to live for a very long time. So I would argue Go and Rust both have their niches. They may sometimes be considered similar, but in fact, from what we're seeing, they aren't. When it comes to the number of jobs, I don't know; I can't comment.
It really depends on which area you're looking at and what the industry needs. I think there's probably more need for websites, microservices, e-commerce systems, and financial systems than there is for tools for Kubernetes or software for cars. I think the next question could be for both of you, around ChatGPT: do you think ChatGPT has become a bit of a hype-driven development pattern? Yeah, I can go first. I mean, yes, to an extent, of course.
I mean, there's been a lot of hype around it in the last five months, and I think part of it is justified. There have been a lot of improvements to it and different tools it can integrate with.
Lots of use cases. I see things all the time, whether it's articles or things popping up on LinkedIn, of people using it in all these different ways. So yeah, there's a lot of hype around it, and that's usually the way the tech industry works: there'll be something cool, and people will get behind it, want to use it, and try to build it into their day-to-day work and projects.
Yeah. I'll stop there. Go ahead, Erik. No, I actually don't have much to add. There is a huge amount of hype.
I mean, what else can we say? And we're figuring out how much of the hype is actually justified. And we do believe-- I say this personally now, not as "we," although I think I can almost speak for the group-- there's definitely substance in there somewhere. We weren't quite convinced about Web3, but with this one, which is reaching the same proportions of hype, maybe even more, we feel there is something of substance. But are we worried that AI is going to take all the programming jobs next week? I don't think we are.
As I said, we don't even believe that a ChatGPT-like system can replace a partner in pair programming. But it definitely has its uses. I guess, going into that, there's another question about GitHub Copilot and the new features they've been launching-- what's the opinion from that perspective? Yeah, I can cover a little bit of this. We do have GitHub Copilot featured on this Radar too.
So we'll have both ChatGPT and Copilot in tools, in assess. And I think there are two main points in the blip. One is that it's billed as your AI pair programmer, and as Erik covered before, we really don't believe in that phrasing-- we don't think it's really a pair programmer in the sense that we believe in pair programming and how you do it.
So I think that's one thing to think about as you're using Copilot. The other thing is that it's worth trying all these tools. If you have the ability to use one, and you're thinking about the security concerns and being safe when using it-- not putting your proprietary information in there-- why not try it out and see if it works for you? A lot of people say it makes you 10 times more productive or something like that. That's really hard to measure, but give it a try and see what it does for you. I think it's really going to depend on your use case and on how the tool continues to evolve over time. And again, it depends on what kind of programming problem you're throwing at it.
On the other hand-- I mean, I recently tried to build something with SwiftUI, which is a relatively new-- I mean, a couple of years old-- toolkit from Apple for writing user interfaces in Swift. And the experience was much, much worse, because there is simply not as much code out on the web. In fact, I used ChatGPT as well, and it invented APIs that didn't exist, because there was just not enough basis. So if you're working in a less mainstream field-- and, I mean, SwiftUI is not, shall I say, a completely niche area.
It's inspirational: you copy-paste some stuff, and then you tweak around with it, and it's still useful. It's probably a little less useful the further away you go from the mainstream. And then, with the last few remaining minutes we have left, staying in the generative AI and LLM realm, there's a question on thoughts about vector databases. Was there any discussion on the Radar about any of the vector databases out there? I'm not an expert.
I think Marisa is shaking her head too. I would say there is definitely-- we had a database for embeddings, and we talked about feature [? source, ?] which are not the same. There's definitely a good number of databases on the Radar, and there have been consistently over the past years.
So if you're interested in that field, it's worthwhile to go to the website. And I know it sounds a bit old-fashioned-- and we'll fix it one day, I promise-- but go to the PDF version; there's an archive of the old editions of the Tech Radar as PDFs. Just go through them-- normally, the Tech Radar in that full form, in the PDF, is not more than 20-30 pages or so, and it's actually quite easy to skim through and see whether there's anything in there. And as I said, a number of different database technologies were mentioned. We've also had experience with using LLMs, self-hosted LLMs, but also now-- the first time we [? can't ?] talk about it publicly-- using GPT.
So we do have that experience with the databases. But as I said, you'd probably have to look it up in the older editions of the Tech Radar. And I think that leads in nicely.
I think there are a couple of questions on-- I see some things dropped off the Radar. Does that mean anything, Erik? Do you want to maybe explain how things stay on the Radar versus drop off, and all of that? Yeah. The short answer, given the time, is they generally don't stay. I mean, the Radar is our report on what we noticed in the last six months.
And things occasionally return to the Radar-- like Vite, for example: we had it on the Radar in October 2021, in assess, when we first noticed it and had our first experience with it. And now it has returned because we have something new to say. But that's the only reason things really appear on the Radar. We don't keep them around.
The Radar is not meant to be a document that tells you exactly what you should use. The Radar really is: every six months, what has Thoughtworks encountered in the past six months? What have we learned? And I spotted a question here in the Q&A section: has Cake as a build tool been considered? Yes, it has. It was on the Radar a few editions ago. But we can't-- and we've discussed it so many times-- keep old editions updated.
And we cannot, just logistically, from a time perspective, go through all the old editions of the Radar and update the advice. So sometimes you just have to live with "Thoughtworks liked it three years ago; I don't know what that means today." Well, I think we're just about on time. So thanks, everyone, for joining. Thank you, Erik and Marisa, for all the detailed information around the Radar.
As I said, folks, stay tuned for when the Radar is published. It will probably be a valuable resource for you and your organizations. I know it's always a valuable and interesting read for me every six months or so, seeing the new trends out there. So thanks, everyone, for joining.