DevOps in Action with SAP Business Technology Platform (SAP BTP) | SAP TechEd 2021
Hello, and welcome to our session DEV100, where you can witness DevOps in action with SAP Business Technology Platform. My name is Boris Zarske. I'm a product manager at SAP, where I'm in charge of DevOps with SAP BTP. And I'm very happy to have my esteemed colleague Harald with me today. Hello, my name is Harald Stevens. I'm a product manager as well.
I'm responsible for transporting in the cloud. And now, Boris, let's get us started. Let's start with a very brief positioning. It is rather straightforward: due to the special role SAP Business Technology Platform plays for the Intelligent Enterprise, we enable you to apply DevOps principles, no matter whether it's for a cloud-native application or an extension of the Intelligent Enterprise. What is it all about? In the past, we piled up a lot of changes into large releases that were then deployed maybe twice a year.
This had two consequences. First, the investment in a new feature took maybe weeks or several months to reach the end user, who could then benefit from it and provide feedback. Second, complexity piled up, because there were more and more interdependencies to handle. Instead, the new approach is to have much smaller changes that get deployed much more frequently.
That way, your investment into an increment reaches the end user much more quickly, and they can provide feedback. You see how it is used and can react to that. Deployment also becomes much easier, because there are fewer interdependencies to handle - and you get used to it, because you do it much more frequently. Why do we do that? Of course, in the end we all want happier customers.
I mean, that's true for SAP, of course, but also for you, if you want to come up with applications, for example for your lines of business. How can we achieve that? Of course, by providing better software - meaning, by reacting faster to the feedback we receive.
And the baseline for that is that we come up with faster deployments, which can only work out if you have quality built in. Otherwise, this would all end in a huge mess. So, how can we achieve this faster deployment with quality built in? For that, let's briefly take a look at our deployment pipeline. Meaning: you come up with a requirement, code it, build it, integrate it, perform tests, release it, deploy it to production - and operate it, which then closes the feedback loop. The concept is to automate large parts of this deployment pipeline with a Continuous Integration server, where a corresponding pipeline is triggered whenever you come up with development changes. When we now take a look at the portfolio of DevOps services coming from SAP Business Technology Platform, we have structured it along phases, or building blocks, of a bridge we want to build between the key representatives of your cross-functional team - meaning developers and operators.
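The stage sequence just described can be sketched as a small model - a minimal illustration of the pipeline concept, not any actual SAP service API. The stage names follow the talk; the pass/fail inputs stand in for the real build, test, and deploy activities.

```python
# Minimal sketch of the deployment pipeline concept described above.
# Stage names follow the talk; everything else is a placeholder.

STAGES = ["build", "integrate", "test", "release", "deploy"]

def run_pipeline(change, stage_results):
    """Push one change through the stages; stop at the first failing stage."""
    completed = []
    for stage in STAGES:
        if not stage_results.get(stage, True):  # missing result counts as pass
            return {"change": change, "completed": completed,
                    "status": f"failed at {stage}"}
        completed.append(stage)
    return {"change": change, "completed": completed, "status": "deployed"}
```

With small, frequent changes, each run carries only one increment, so a failing stage points directly at the change that caused it.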
The first phase of this is 'plan and setup', where we provide, for example, best practices for how you can run applications in the cloud. But the main part is actually setting up your pipeline, which is then used in the 'development and test' phase. Meaning: your developer comes up with changes, this automatically triggers the pipeline for automated build, test, and deployment, and you can then react directly to the feedback. A key part here, of course, is that you have good test coverage. And that's not something you can come up with overnight, of course.
So my recommendation is to look into test-driven development: start simple, then evolve as you go and gain further experience. As soon as you have a qualified release candidate, it is propagated toward production in the 'deliver and change' phase - either highly automated or with more control, which is quite often required in enterprise environments.
There, you can have an automated handover into transport and change management, such as for auditing reasons. Then you make sure your application is provided with the right performance and availability in the 'monitor and operate' phase, where we provide means to see what is going on with your application on the platform, with monitoring capabilities. You can subscribe to events coming from the platform. And you can think about automating recurring manual operation tasks. As an overarching topic, we have the 'automate and optimize' phase, where your cross-functional team looks for optimization potential along the complete lifecycle and realizes it accordingly.
If we now take a look at our demo scenario today, you can witness the key DevOps services of SAP Business Technology Platform. Harald, what does the demo scenario we want to show today look like? Thanks, Boris. Let's have a look.
So we have a developer who starts working in the Business Application Studio, makes a change, and then pushes that change to the central source code repository - in our case, GitHub. This triggers the automated pipeline in the Continuous Integration and Delivery service. This pipeline builds the application, performs some tests, and then deploys it to the development environment. At the end of the pipeline, it creates a transport request in the Cloud Transport Management service. There, it is now waiting in the queue of the QA environment to be imported.
And that is triggered again by a user - maybe an operator, or another person. This then deploys the application to the QA environment. By doing this, we trigger alerts that are consumed by the Alert Notification service, which in this case creates a notification in a Slack channel.
But of course, these could be other channels as well. When the transport is completed, this creates another alert, which then triggers, via the Automation Pilot, a regression test of that application - and again a notification to the user. When the testing in the QA environment has completed, the next step would be to import the application into the productive environment.
And again, here we are using Alert Notification and Automation Pilot to create notifications, start a smoke test in the productive environment, and notify the customer about the outcome of this smoke test. So that would be our demo. And now let's see what it looks like. In our demo, we will use a very simple HTML5 application, which more or less just displays a greeting with an icon. I will just show what this application looks like. It basically just displays an icon with this TechEd greeting.
So, we will now change this icon. And for that I'm using the Business Application Studio. This is the coding of the corresponding view.
And here you see the 'hello world' icon so far. I will now change it to the 'activate' icon. So, this has now been changed, and I can stage the change, add a commit message, and commit it.
All right. Now this has been committed to our local repository. And I will now push this change to my central GitHub repository. And in this central GitHub repository, we have set up a Webhook, which automatically triggers our Continuous Integration and Delivery service, which will now be presented by my colleague, Boris. Over to you, Boris. Thanks a lot, Harald.
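The webhook handover Harald described - GitHub calling the CI/CD service on every push - can be sketched roughly like this. The `ref` and `after` fields come from GitHub's push-event payload; the trigger itself is a placeholder, not the actual service call.

```python
import json

def handle_push_event(payload_json, watched_branch="main"):
    """Decide from a push-event payload whether to trigger a pipeline run."""
    event = json.loads(payload_json)
    # "refs/heads/main" -> "main"
    branch = event["ref"].rsplit("/", 1)[-1]
    if branch != watched_branch:
        return None  # pushes to other branches are ignored
    # Placeholder: a real webhook receiver would now call the CI/CD service.
    return {"commit": event["after"], "branch": branch}
```

In the demo, this is what happens behind the scenes: the push delivers the commit ID that later shows up again on the transport request.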
So, here in our service we have configured several pipelines, as you can see on this overview screen. And for our TechEd demo, of course, we have a corresponding pipeline configured. Let's just briefly open it and you see it already came up with a corresponding run.
And we just go in there and see that it's already running. Before looking into that in more detail, let me just stress that this service really lowers the entry barrier quite dramatically, so you can quite easily come up with a running build, test, and deploy pipeline for SAP-specific use cases. While the pipeline is running, let me briefly guide you through the process of setting up a pipeline. First of all, you can store credentials - for example, webhook secrets for your repositories, basic authentication like here for your subaccounts, or service keys for other services the Continuous Integration and Delivery service works together with. Then we have to configure our repository.
And here you see we have also added support for further repositories, such as Bitbucket Server and GitLab, in addition to GitHub. And yes, we are planning to extend this further in the near future. Then you just create a very simple pipeline, and for that you can select one of the existing pipeline templates. You see, we have pipeline templates for the SAP Cloud Application Programming Model.
We have pipeline templates for SAP Fiori - be it in the Neo environment or in the Cloud Foundry environment. And just recently we also added a pipeline template for SAP Integration Suite artifacts. After you have selected the corresponding templates, you just select those stages you want to be part of your automated pipeline.
You see we have a build stage, of course. We can also perform some checks here, like static code checks for the Cloud Application Programming model. We can add additional unit tests here.
And then we configure also the deployment into the target subaccount. Pretty straightforward. And again, we also extend that further.
For example, for the Fiori pipeline templates, we also added an automated test stage using UIVeri5. And we also want to come up with further templates, such as for container-based applications. Now let's go back into our running demo pipeline. You see that we have almost reached the end of the stages. Meaning: we performed an initialization of the pipeline. We skip the tests here just to make the demo run a little bit faster.
We performed a build. We performed a malware scan. And we performed the deployment.
And you see the deployment is happening into the DEV subaccount, not surprisingly. So let's briefly look into our DEV account and see if the small change Harald made really did come up there. For that, I will just go here to my subaccount - you see the DEV subaccount. And here, I have my corresponding HTML5 application.
I will just bring it up here. And then you see this new icon also came up, as expected. In addition - and you might have also seen this as a short glimpse as part of our pipeline - we have a separate step here called 'upload to cloud transport management service'. In our demo scenario, we want to benefit from the high agility of the automated pipeline, like verifying single developer changes, while still having full control over the propagation of the changes toward your production subaccount, such as for auditing reasons. For this, we have configured an automated handover of the qualified release candidate into a delivery landscape handled by transport management. So in this example, each pipeline run that is successfully performed - meaning: where the corresponding tests pass and everything is fine - triggers a transport in the Cloud Transport Management service. Of course, another option could be to have several pipelines, such as one for standard deployment - meaning, to verify the changes without handover into production - and a release pipeline that can then be triggered separately. But how does this now come up in SAP Cloud Transport Management, Harald? Thanks, Boris, for showing this CI/CD pipeline. Here, you see the overview screen of the transport management service. It has been remodeled in recent weeks. And here you now see at first glimpse the status of your imported transport requests.
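The two handover options just described - every green run creates a transport, or a separately triggered release pipeline does - can be summarized in a few lines. This is purely an illustration of the decision logic, not any actual service API.

```python
def finish_run(run_status, strategy="auto"):
    """End-of-pipeline decision: hand over to transport management or not."""
    if run_status != "success":
        return "no handover"               # failed runs never reach transport
    if strategy == "auto":
        return "create transport request"  # every green run is handed over
    return "await release pipeline"        # 'manual': a separate pipeline releases
```

Either way, what arrives in the transport queue is only a qualified release candidate, never an unverified change.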
You see, most of them were successful, but there were also some fatal errors concerning the import of the transport requests. You see how many transport requests are waiting in which queue. And you also see how much space has been used out of your overall storage quota. New is also the direct link to documentation and to relevant blog posts. If I now go to the landscape visualization, you see that we have modeled our landscape with DEV, QA, and PROD subaccounts or spaces, and the number of transport requests in the corresponding states.
As you remember from the Continuous Integration pipeline, in the last step of our pipeline we had the handover to the Cloud Transport Management service, targeting the QA system. Therefore, we will find our new transport request in the QA queue. And you see here the commit ID starts with 5dd. If I now go to my transport landscape and open the queue of the QA system, you will see that we have a new transport request here, with the commit ID starting with 5dd.
So this is the transport request we are interested in. I can now mark it and start the corresponding import. Now I'm asked to confirm - and I will do that. And now the import will be running. While this is running, I will show you some more features of the Cloud Transport Management service. First of all, already mentioned by Boris, is the high level of governance we are offering with it.
And one of them is the audit functionality. We have a transport action log showing all the activities happening in your complete landscape. So you see, with historical data, who performed which step on which system, and when.
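The action log answers exactly that question - who did what, where, and when. As a data structure, that is simply a list of entries like the following; the field names and values are invented for illustration, not the service's actual log format.

```python
from datetime import datetime

# Illustrative shape of transport action log entries (field names invented).
LOG = [
    {"time": datetime(2021, 11, 16, 9, 58), "user": "pipeline",
     "action": "upload", "node": "QA"},
    {"time": datetime(2021, 11, 16, 10, 2), "user": "harald",
     "action": "import", "node": "QA"},
]

def actions_on(node, log=LOG):
    """Filter the action log for one node, newest entry first."""
    return sorted((e for e in log if e["node"] == node),
                  key=lambda e: e["time"], reverse=True)
```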
As you can see, the recent ones are: we have uploaded the transport request from the Piper pipeline or from the CI/CD pipeline to the QA node. And then I initiated the import into the QA node. At the same time, the transport request was also added to the production queue. Now, it's waiting in the production queue.
And depending on the outcome of the tests in the QA system, we can then decide to import that transport request further into the productive environment. Now let's go back to our landscape and see how far we have come with our import. You see this import has now successfully finished, so I can have a look into the log file.
And you see that the deployment started at 10:02. Now we can see what was done during this deployment - which UI5 modules were imported in that import. And at the end, after about two minutes, the deployment finished successfully. So as you can see, you have very extensive possibilities to check what happens in transport management. But there are also other options.
And one of them is using the already mentioned Alert Notification. So, if you configure your node accordingly - you see, here I switch on this notification flag - then it is possible to use the notifications to your needs. And that will now be explained by Boris again.
Right. For this I just brought up our corresponding Slack channel here. And you already see that you get notified about what is going on in your landscape.
We see a notification that this import started on the QA node, and that it also finished there. That, of course, opens the door for a rather reactive way of working. Instead of monitoring what is going on with your application, you get notified about what is going on and whether some activity is required. Of course, this is a very simple example right now, but you could come up with much more complex ones where, for example, you only get notified when an issue comes up.
What is the background here? As Harald mentioned, we have the SAP Alert Notification service. So let's briefly open it. We already see that we have several subscriptions. This service really allows you to subscribe to events coming from SAP Business Technology Platform - be it from the services you use, like the SAP Cloud Transport Management service, or from the hyperscalers, if you have some persistency running there, for example.
You can also come up with custom alerts, and then just specify the channel of choice in which you want to be notified. It's very simple - you see here I can create a new one. It just guides you through a very simple process, where I specify some metadata of the alert, like the name and description. Then I select the condition, meaning: when do I want to be notified? Is it when something happens in a service, or in my own application?
Then I select the action, meaning do I want to be notified via an email, via the Slack channel, via Microsoft Teams, via direct integration into SAP Solution Manager, for example, or SAP Cloud ALM. So you have that very tight integration - but also openness: of course, you can also then use APIs to channel that into other third party offerings. And that's it mainly. And here you see some examples.
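Conceptually, each subscription combines a condition (when to notify) with an action (where to notify). A rough sketch of that pairing follows; the field names, event type, and channel values are made up for the example and are not the actual Alert Notification schema.

```python
def make_subscription(name, event_type, channel, target):
    """Build a subscription: condition (what matches) + action (where to send)."""
    return {
        "name": name,
        # Condition: only events of this type trigger the subscription.
        "condition": {"property": "eventType", "predicate": "EQUALS",
                      "value": event_type},
        # Action: the channel of choice, e.g. email, Slack, Microsoft Teams.
        "action": {"type": channel, "target": target},
    }
```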
I mean, here we have, for example, a subscription that sends a Slack notification whenever a transport occurs. And that's very straightforward. The other thing you have seen in our Slack channel is that the successful import into the QA node also triggered an automated test performed by our SAP Automation Pilot service. And you see I also get notified about the outcome of this test.
That is really something we did not have to trigger - it was performed automatically. The SAP Automation Pilot service provides catalogs of automated commands. So let's briefly look into that service as well. Here, you can see that you can come up with your own catalogs, meaning you can also include your own scripts.
We bring a lot of commands - several hundred commands are part of the service already. And we are constantly working on extending that. There are commands around DevOps, lifecycle management, alert remediation, and so on - all with a clear focus on SAP scenarios again, which is of course one of the benefits. Here you also see the corresponding executions.
You can also come up with your own commands. And here we see that we have performed the corresponding test. So let's briefly look into that one. You see that the command we have added is very simple. The concept is always that you have some kind of input for the command, then some action, and then an output.
Very simple, very straightforward. Here, in our example for this very simple test, we just open the corresponding URL of the application, and then have a check and a validation to see whether there is an issue with displaying the page.
A corresponding state is then defined. Very simple here, of course. But you have a lot of flexibility, and I'm pretty sure that many of you might use this in other contexts. The main advantage we see, though, is the very tight integration with the other SAP Business Technology Platform services.
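The input-action-output pattern of such a command, with the demo's page check as the validation step, might look like this in outline. All names and structures here are illustrative, not the Automation Pilot API; the fetch function is a stand-in for the real HTTP call.

```python
def run_command(inputs, action, validate):
    """Command pattern: inputs -> action -> validated output state."""
    raw = action(inputs)
    return {"inputs": inputs, "output": validate(raw)}

def validate_page(raw):
    """Validation: the page is healthy only if it answered 200 with content."""
    status, body = raw
    return "OK" if status == 200 and body else "ERROR"

# Stand-in action; a real smoke test would perform the HTTP GET itself.
def fetch_stub(inputs):
    return (200, "<html>TechEd greeting</html>")
```

The resulting state ("OK" or "ERROR") is what the service can then report back, for example as a notification in the Slack channel.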
They really go hand in hand, and it's very straightforward. That is also something you see here in the Slack channel again. I mean, we see the notification about what has happened, and then Alert Notification also automatically triggered the Automation Pilot service to perform that automated test. And that's not all.
This automated test - confirming that the quality assurance test completed successfully - could now also have triggered the propagation of the change toward the production environment. And with that, we are back in the SAP Cloud Transport Management service. So, Harald, over to you again. Thank you, Boris.
Yeah. Let's pick up where we left transport management. Here you see the import queue of the QA system. The import of the last committed change was successful. I will now just prove that this change really reached the QA system. So, I'm switching back to my cockpit and now use the QA environment.
So, here's the QA environment and let's look into the application. Open it again and we see that the 'activate' icon has reached the QA environment, as expected. Now, let's go to the transport management again and switch now to the production queue.
So I go back to my visualization and open my production queue. Based on the positive outcome of the QA test, which was run automatically by the Automation Pilot service, I now have the confidence that I can start the import into the production environment. As Boris said, it would potentially be possible to trigger that automatically from the Automation Pilot itself, if we set up a command accordingly. But here I'm doing it manually, after checking the results of the QA test. And now this import is running.
While it is running, just another feature we have for transport requests: you can also look into the content, if it is MTA (multitarget application) content. Then we display which modules are contained inside that multitarget application, just as additional information. So now, after a few minutes, the import has successfully completed.
We have now imported that change into the production environment as well. And I will now check again whether it has reached the environment. So I go to the production environment and call the application from here. If I now open it again - and again, the 'activate' icon has reached its target. And I think we will have a last look into the Slack channel.
And for that, I hand over back to Boris. Right, Harald. Here we are, back in our Slack channel. First of all, we again see the notifications of the import into the PROD node - that it finished successfully. And we also see that SAP Automation Pilot again performed a test - this time a smoke test, in production - and that everything is fine.
This concludes our actual demo. Now we will take a look at what we have seen and how you can take the next steps. OK.
I hope you have gained a first feeling for how everything can come together. You can start very simply with our SAP Continuous Integration and Delivery service, which allows you to come up with an automated build, test, and deploy pipeline almost out of the box for SAP-typical use cases. Should you need more flexibility, we also have you covered with our project 'Piper', which allows you, for example, to download pipeline templates from GitHub.
For the propagation toward production, we allow the handover into transport and change management. There, for example, you can define delivery landscapes and directly control who is allowed to come up with changes and handle them in which subaccount. This information is then also stored inside the system for auditing reasons. For the operations part, we have seen that you can subscribe to alerts coming from the platform and the application and react to those, which is a more reactive way of working. And that you can automate recurring manual operation tasks with the SAP Automation Pilot service.
Harald, if our customers now want to try it out on their own, what could their first steps look like? So, if you want to dive deeper into the topic, first, we have lots of learning material available for you on the DevOps SAP Community page. We have an openSAP course, which ran at the beginning of this year and is still available in read mode. So that is a good source for lots of information.
And we have webcasts, we have a learning journey - so, lots of things to choose from. If you want to try it out yourself, we have blog posts by colleagues describing two scenarios that involve quite a few of our tools. And we have a trial environment in SAP BTP, which offers most of the tools we have shown.
And we have free tier: if you have already subscribed to SAP BTP with certain subscription types, then you can use the free tier plans of our services. We have missions in the Discovery Center, and we have tutorials for the CI/CD service. If you have ideas about what we could do better, we are very keen to hear them.
So, please use your possibilities to influence our development. If we look at which use cases we mainly cover: on one side, we have the extension use case, and on the other side, the setup of side-by-side UI extensions.
So we are in this extension area with our tools. And if you want to learn more, TechEd is of course a great place to do so - please make use of it. I want to thank you for joining us. And Boris, the last words are for you. Thanks a lot for joining.
And enjoy the rest of SAP TechEd.