Tag Archives: rant

What is “table stakes” for API functionality?

What is “table stakes” for product development these days? Do you believe that we should be following an “API first” development strategy?

I recently (well it was a year or so ago) came across some cool SAP SuccessFactors functionality which allows for a person to be put temporarily into another position, returning to their substantive position after a fixed period. Many of you will know this as “Higher Duties” or “Temporary Assignment” functionality. The problem I found (more recently) was that whilst most of the information about the temporary assignment was available via API, one key piece of information (very important information), the end date of the assignment, was NOT available via the standard SAP SuccessFactors OData API. ☹

Surprisingly, and luckily, the information is available via the old SFAPI (a SOAP-based API, and we all know how much developers love SOAP!), so I can work around this for the moment.
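
For the curious, here’s roughly the shape of the OData side of this – a minimal sketch assuming basic auth, with an illustrative base URL and entity/field names (not the documented SuccessFactors schema; check the OData data dictionary for your own instance):

```python
import requests

# Minimal sketch: query a temporary assignment via the SuccessFactors OData
# v2 API. Entity and field names are illustrative assumptions.
BASE = "https://apisalesdemo.successfactors.example/odata/v2"

resp = requests.get(
    f"{BASE}/EmpJob",
    params={
        "$filter": "userId eq '12345'",
        "$format": "json",
    },
    auth=("apiuser@COMPANYID", "password"),  # basic auth, for brevity
)
resp.raise_for_status()
for record in resp.json()["d"]["results"]:
    print(record.get("position"), record.get("startDate"))
    # ...but no temporary assignment end date in the payload, hence the rant
```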

This said, to my mind, new(ish) functionality like HD/TA for SAP SuccessFactors should always make the data that can be seen in the frontend application available via the OData API. That should be “table stakes”. A modern HRIS must be supported by a secure, open, documented and complete API. During sales presentations I often ask potential customers to compare SAP SuccessFactors with the other products they are considering: Google “name of HRIS solution” + “API” and see what is returned, whether they need to sign in to see it, whether there are supporting documents, examples, etc. SAP SuccessFactors pretty much always wins this game (mainly because Microsoft and Google don’t have competing products in the space, because they use SAP SuccessFactors.)

SAP has come a long way in making its products somewhat developer friendly, so it really irks me when I’m told “We do not have this on our roadmap at this point in time, please raise an influence request”. So I did – https://influence.sap.com/sap/ino/#/idea/308298

If you’re an SAP customer and you manage to read this far into this post, please do me a solid and log in to vote for my influence request! Thank you! Whilst you’re there, have a look around; there are loads of requests that would make so much sense to implement, so vote for them too.

Customer voice is so important – and this “influence.sap.com” space is one area where we as customers actually get a little bit of a say!

Girl hiding behind tree

Permission to be seen (or not).

Hello! Time for me to go on a bit of a rant again. So far, these little rants have been very successful! With support from the community (and by demonstrating this to the SAP SuccessFactors leadership group), we’ve pushed the dial a few times in the right direction (well, I thought it was the right direction anyway!) Although, I’m perhaps not as optimistic about this one… let’s see!

The “Reimagined” Home Page (a naming that is going to get tired very quickly!)

A little while ago (quite some time actually – end of 2020) SAP announced that it was going to retire the existing Fiori Launchpad style homepage and move to a new “reimagined” home page. The reimagined home page had been demo’d during a few SuccessFactors conferences and looked quite exciting. The new home page is pretty cool. The whole concept of having the things that are important to you right now highlighted and brought to the front is a good one. Show me what I need to act on right now! Make me do it quickly!

Image from SAP Help – https://help.sap.com/doc/62fddbd651204629b46bbccbabf886ba/2011/en-US/e13ea0e595b148edbb44f424f1a00b7c.html

This said – it seems that it’s still a bit of a journey to parity with the old home page. (Which may never happen, given the different idea that we’re working with – some things just wouldn’t make sense in a one-to-one mapping.) However, relatively important pieces of functionality, like to-do notifications, manager team tiles and the ability to use it with onboardees, are still being added. There’s also the minor/major issue that you can’t do a refresh to or from any system that has the reimagined home page implemented using the instance refresh tool; you have to request SAP to do it. The plan is that by the end of the 2H 2021 release, everything that is needed to go live with the new home page will be added and these issues fixed. (And hopefully we’ll be able to have that text in the middle of the top panel in some colour other than white.)

The (forced) migration to the reimagined home page

There was a plan to push all customers to the new home page by 2H2021. (I just can’t manage to keep typing reimagined… it’s so going to get renamed to “Home Page” as soon as it’s the only option. Can I chalk up another product renaming before it even happens?) But then, because some functionality wasn’t going to be available until then, there was this strange idea to push the release universally to all customers’ preview instances in 2H2021 and then production in 1H2022.

So, I had a bit of a rant about how it really didn’t make sense to push the new home page to all customers’ preview instances before all of the fixes were rolled out and customers had some time to test them. (It wasn’t just me that had this rant – there was lots of community support for that idea.) SAP have now pushed back that idea; we should instead get the universal push in 1H2022 for all customers. So, you’d better get ready for it! Cos it’s coming!

Okay…  so what’s the problem?

Well, see the thing is, the main reason that we need the new home page experience, is also the main reason why the existing experience is so useful.

Image again from SAP SuccessFactors What’s New Viewer – https://help.sap.com/doc/62fddbd651204629b46bbccbabf886ba/2011/en-US/e13ea0e595b148edbb44f424f1a00b7c.html

Note how there’s a lot of content on the old layout compared to the new one… well that screen shot is pretty minimalistic compared to some customer instances I’ve come across. (And, I’ll admit, helped implement.)

Here’s a screen shot of one of our demo systems; there is a LOT here.

Screen shot of a system that’s using a lot of Fiori Launchpad tiles

The original idea of the Fiori Launchpad was that a user would be able to see all their important information in one place and drill down to bits that stood out. Of course, that doesn’t work because having pages of stuff means people don’t look at any of it. So the idea of using machine learning to figure out which bits to surface for a person to look at, makes great sense.

The problem is that in many cases SuccessFactors doesn’t know what’s important.

This is especially the case for extension use cases and the “We don’t use just SAP SuccessFactors for all our people processes” use case.

Here are a couple of examples from one customer I work with:

“My Team” tiles: approve leave requests and Leave balance for my team

These two existing home page tiles (“Legacy” is the terminology now, I believe) link off to BTP Cloud Portal, which then uses SAP Cloud Connector to tunnel through to an on-premise SAP ECC instance and display Fiori-based leave balance and approval apps based on data still stored in SAP Payroll, not in SAP SuccessFactors.

It is exactly the same with these tiles:

Employee Self Service links to payroll (non ECP) applications

The customer also has additional BTP based extension applications – here’s one example:

SAP SuccessFactors Extension running on SAP BTP

All managers get the team leave balance tiles/applications and all employees (not contingent workers/contractors) get the leave and payslips apps. And everyone gets the Network Compliance App (it’s really cool btw, if you need something like this, please give me a shout!)

In the new home page these would ideally be part of the leave management quick links or approval tiles that pop up as needed, or payslip tiles that appear when payroll has been sent to the bank (dreaming here about next-gen payroll, but you get the idea). However, because all these tiles are just links out to other systems/applications, they can’t be part of the “intelligent” framework and instead must be part of the “Organisational Updates” section of the homepage. And they take up about twice as much screen real-estate. There’s also a limit on the number of tiles that will show in this section, so you’d better hope you don’t have too many custom applications/extensions that you want to link to. (Originally the SAP team had thought to limit the number of “cards” in this section much more, but fortunately a few more are allowed now. Feedback works!)

New Home Page “Cards” – graphic is required. Sizing approx 1.5-2 times larger than existing tiles

Managing Sections with Permissions

So, whilst I would argue that custom cards look really bad in this new experience (they are big, chunky and not sorted into any meaningful categorisation – certainly they are not “Organisational Updates”! – but that’s a topic for a different set of feedback!), we’re only just now getting into the crux of this particular rant, which involves figuring out how to limit who gets which tiles/cards. In the “legacy” home page there is functionality which allows an administrator to create “Sections” on the home page. These sections can then be shown/hidden based upon Role Based Permission (RBP) roles and groups.

Capture of the “Sections” area of legacy home page configuration – access via “Manage Home Page”

It’s as easy as creating a section and then using the drop down to pick as many roles/groups as you want to allow to see that section.

Multi-select list of all permission roles and groups.

One particularly useful thing is that you can choose the system generated roles used by compensation, but any role can be used, and this can be one that is assigned to a population of “Managers”, for example:

Standard role assignment screen, where a role can be assigned to a permission group or also to a set of automatically determined user populations. Very useful!

This gives a particularly powerful way to assign access to sections based on system generated subsets of the employee population.

Then we can simply assign whichever tiles we want to whichever sections we want and hey-presto, we have managed to use RBP roles and groups to restrict which users have access to which tiles.

There is the downside that quite often we ended up with a section with only one or two tiles in it, but that wasn’t so bad.

So, again, where’s the problem? Well, the thing is, in the new home page, you can’t do this!

Managing custom tiles with Home Page groups

The solution that has been adopted by the new home page is one that also existed in the legacy home page, but which we didn’t use because it’s (in my opinion) rubbish. When you create a custom tile you can assign it to a “User Group”.

Final step of creating a custom tile (very similar when creating a custom card) – assign a user group to the tile/card

It is possible to edit these user groups:

Dynamic Group maintenance for home page tile groups. (very similar to permission group maintenance.)

You’ll hopefully be familiar with the layout of the editor as it’s the same one used in managing role-based permission groups.

However, note that you are creating and maintaining “homepage tile groups” and not RBP groups.

There are some serious restrictions here – you cannot make these groups contain, for example, all managers, or any of the other automatically built permission role assignment populations.

If you have an extension application that relies on the end user having a certain set of SuccessFactors role-based permissions, then any edit to the home page tile group (the list of people who can see the custom tile that links to the extension application) MUST also be made to the associated role-based permission group – the two must not get out of sync.

Work Arounds

Well – if you have an extension application that you only want to display to managers as a custom card, you’re pretty much stuffed – you cannot use the new home page tooling as it works today. The only way would be to manually maintain the list of all employees that are managers in your organisation. And, ummmm, sorry, this ain’t happening!

I’ve considered building tooling that could automatically maintain these groups based on similar logic to the existing home page, but unfortunately the APIs for dynamic groups are all read only and cannot be used to update a group.

The only things I have so far come up with to enable effective filtering and using custom home page cards are:

  • An additional extension application that is launched from the home page and then provides another view of which “additional” applications a user can access. In effect, a secondary launch page which provides the functionality to filter links to applications based on permission roles and groups, in the way that the new home page does not.
  • An application/integration that regularly goes through all users and updates one of the custom fields to a value which can then be queried by the home page tile dynamic groups (i.e. populate an “is a manager” flag against custom field 14 or something – see the sketch below).
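
To make that second option concrete, here’s a minimal sketch of what such an integration could look like – assuming basic auth, the standard OData v2 upsert endpoint, and an illustrative “custom14” field (paging, error handling and clearing the flag for ex-managers all left out for brevity):

```python
import requests

BASE = "https://apisalesdemo.successfactors.example/odata/v2"
AUTH = ("apiuser@COMPANYID", "password")  # basic auth, for brevity

# 1. Read every user's manager assignment and derive the set of managers.
#    (Paging via $skip/$top omitted to keep the sketch short.)
resp = requests.get(
    f"{BASE}/User",
    params={"$select": "userId,managerId", "$format": "json"},
    auth=AUTH,
)
resp.raise_for_status()
users = resp.json()["d"]["results"]
manager_ids = {u["managerId"] for u in users if u.get("managerId")}

# 2. Upsert an "is a manager" marker into a spare custom field so that the
#    home page tile dynamic groups can filter on it. The field name is an
#    illustrative assumption.
payload = [
    {"__metadata": {"uri": f"User('{uid}')"}, "custom14": "MANAGER"}
    for uid in manager_ids
]
upsert = requests.post(f"{BASE}/upsert", json=payload, auth=AUTH)
upsert.raise_for_status()
```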

In standard configuration it is still (thankfully) possible to use permission roles and groups to decide which items are available in a given user’s navigation menu – just like the logic that allowed sections to be permissioned in the legacy home page.

Configuration of custom navigation – menu item access can be controlled by permission groups and roles.

So, it is an option to remove these applications completely from the homepage and just have them in the custom navigation. For those customers that don’t have a crack extension development team to provide a custom tile that can use RBPs to provision (or not) a secondary launchpad, I think I’d be suggesting that they build a custom tile that explains how to find the now-missing links in the user menu.

Plea for support

Okay, hopefully you can see now why I’m really worried about the forced push to the new home page in 1H2022. I have spoken at length with the product team for the reimagined home page and whilst they see the potential issue, the feedback to date has been that they do not envisage fixing this issue before the universal migration to the new experience. They actually encouraged me to write this blog post because they want to see if others believe this to be an issue! I did ask them to just run a query on any existing customers that are using the permissions in existing home page sections, but I haven’t got a response on whether that is a large number or not.

SO… if you’ve read this and thought, “Crap, that could be something that’s going to cause me an issue!” Please, please, get onto the SAP SuccessFactors community web site – post your comments on the Migration to Reimagined Home Page Within 1H 2022 Release – Innovation Alert blog post.

Give me some “Kudos” for my comment on that post about this issue: https://community.successfactors.com/t5/Platform-Resources-Blog/Migration-to-Reimagined-Home-Page-Within-1H-2022-Release/bc-p/268878/highlight/true#M1833 and write about your own concerns. If we have enough customers express to SAP SuccessFactors that this will cause a problem we might just yet, get a solution.

Cheers! Here’s to getting the community involved!

I have some feedback for you…

Image by mohamed Hassan from Pixabay

Right, this feels kinda awkward… I’m about to give Microsoft kudos and point out how I wish that some SAP processes were closer to what I’ve seen from the team at Microsoft. So bear with me if I seem a little less hyperbolic than usual…

This isn’t the Microsoft you remember

Recently I was working through the options for integrating SAP SuccessFactors personnel records into Microsoft AD; it’s something that every organisation that doesn’t have a dedicated IAM (or IdAM, however you want to make up your TLAs or FLAs) solution is likely to need in their environment. Have to say, I love working with new “start-up” orgs that don’t use an on-prem AD, but there aren’t many of those large enough to pick up SuccessFactors, so they’re probably still a minority.

Documentation is a skill that is distinct from development

Anyway, I happened to look at the Azure AD online doco about SuccessFactors integration and discovered it had been written by a developer. Well, that’s a guess, but seriously, who digs through the results of an API call to get config values out of a system when you can just use the standard tooling to do it? And then makes some poor sod document how to use Postman to do the same? So I suggested an update.

Arrrgggh!

no – just use the UI!

So, I was feeling benevolent and thought I’d offer my advice that perhaps there was a better way. I clicked the feedback button…

Shock horror – I wasn’t redirected to another site and asked to create a new user, I was asked to create a Github issue! (Okay if you don’t have a Github user, you’ll be asked to create one, but seriously, you don’t have a Github user id?)

https://github.com/MicrosoftDocs/azure-docs/issues/62443

Totes easy!

And now we wait… or not

Then I resigned myself to never hearing back about it again… But I did!

The issue was triaged and assigned to the document author to review that very same day! (That’s not normal, is it?)

I was – wow!

Then things got surreal…

Not only did I have someone look at and action the feedback that I gave, they then went and found my tweet on the subject and personally responded to it! Wat?

And now, the update to the documentation is about to go live:

And hopefully that will make some poor consultant/tech support person’s life a little bit easier.

Meanwhile, back at the ranch

So let’s compare and contrast. And I know this isn’t apples to apples – doco is different to application UI changes – but let’s compare the process at least.

I was working on the new SAP SuccessFactors IAS/IPS integration on my own company’s system when I had an issue – I couldn’t figure out how to change some value in the config. Fortunately there is a partner community that SAP has set up for partners to discuss these sorts of things and get some assistance from each other.

(Sidebar – yes, I know it’s a bizarre idea, consultants helping competing firms’ consultants do stuff. But in the scheme of things, the other consultants are all good people, they just aren’t lucky enough to work for my company, and helping others tends to do pretty good things for your own internal skills too.)

If you don’t know, then ask!

So I raised the issue in the forum, and the really nice SAP person who has to read all my grumbles and moderate the forum raised it in the fortnightly call that SAP hosts for partners (it’s at 12:30am my local time, which makes it a bit fun, but better that than 6am!)

And there was already a solution! WOOO HOOO!

Pretty cool, so I had a look..

https://help.sap.com/viewer/6d6d63354d1242d185ab4830fc04feb1/Cloud/en-US/be6d6f210d30404d827f8c9e78ec4489.html

If you have to attend training to do something, it isn’t intuitive.

Let’s just say I wasn’t impressed with the UX, and I realised why I hadn’t figured out how to do this myself! Colouring something BLUE in an SAP UI5 app is possibly the least intuitive thing on the planet to indicate that it is editable if you click it. Possibly the developers had played one too many games of Day of the Tentacle and thought users needing to randomly wave their mouse around the screen to see if it changes pointer shape was a good way to indicate that something is clickable? (Okay, I doubt that was actually the case; more likely someone threw a guideline at them that didn’t make sense and they had to get inventive to work around it (been there!)) Pretty much everything in standard SAP UI5 apps is cyan or blue, and I’m not checking every one of them to see if it’s different.

So I gave some feedback on the forum.

I even tweeted about it. Cause that’s what you do, right? (Well, it’s what I do. I mean, there’s a certain type of person who stays up late at night writing blog posts about these sorts of things, so what do you expect?)

This then led to a bit of a conversation in my DMs with someone from SAP (since it was DMs I’ll not share; private stays private) who suggested that I really needed to raise this with support since it was an issue, and that helps track that people have issues. Likewise in the forums I was directed to raise it formally.

To whom it may concern

So I raised an SAP Support ticket (low priority, since I already had a fully working workaround.)

I would happily have bet on the response, and I’d have won!

Thanks Mike – yep, the ole “Raise an enhancement request” gambit. That place where good ideas go to die.

“Once more unto the breach, dear friends, once more;
Or close the wall up with our English dead”

But by this time I was, “right, whatever, let’s see how far this sucker goes!” So I raised that enhancement request.

Oh – and whilst I was doing that, I came across a small issue…

The feedback site is hosted in Europe. I am not in Europe. But that’s cool because there’s this concept called CDNs, yeah, that allow large websites that are accessed around the world to be accessed in a reasonably fast manner from everywhere.

Yup – CDN wasn’t enabled. It is now – so the rest of you can thank me for suffering on your behalf!

Sod it though I’m gonna get this bugger filed! Oooh flashy light on my phone…

Anyway after much self flagellation

I got the request raised! I had to attach my diagram as an “attachment”, not able to be viewed inline in the request – but hey – it was raised.

And there it sat…

Tick tock…

Three weeks later my request was “Acknowledged”.

What’s the German word for “million to one chances happen constantly”?

And in the weird way that the universe works, whilst I have been typing this up I got a comment on the request. Strangely I didn’t get a notification (yet), but I’m keeping my fingers crossed for sometime tonight. I did just check my spam email folder too; interesting that it’s about 30/70 banking phishing scams and webinar invites, I’m sure it used to be far more interesting. But there was nothing to notify me that my request had an update.

The really nice lead designer for the product reached out and asked me what I thought about their thoughts about making some UI changes to make things easier to use!

The response was awesome! I loved it!

They ended the message with a request for my thoughts!

“Please let us know what you think.”

YES, YES, YES!

Well – I can say I was totally stoked, so happy! And then I tried to find the button to reply….

The irony of wanting to reply to a conversation about improving UI to make things more obvious and easier for people to use and then not being able to because the UI of the tool in which the conversation is happening doesn’t facilitate it.

Anyway. I did what I always do. Tweet lots, then try to figure out what to do…

It would appear that someone thought that it would make more sense for new comments to appear at the top of the conversation, not the bottom. So by clicking on the comments “tab” at the top of the page I was navigated up the screen and saw that I could enter a new comment. I did. And I tried to be very nice in my feedback (given the amount of huffing and puffing I’d been doing seconds before.)

Two ways of doing things, both with good result

So, we have two different scenarios; both ended up (or will hopefully end up) with some change in the product, influenced and suggested by me. Two out of two is pretty good going. However, one took 3 weeks, the other over 3 months. One was painless and easy, the other painful and frustrating. As I said earlier, we’re not comparing apples to apples – getting a product changed is much harder than getting some doco changed. And I have heard anecdotally that some areas of SAP are even faster:

Interesting to note that the area that got the same day response that Robin mentions is also using the Microsoft Github tooling to manage issues. I wonder if tooling impacts delivery approach?

Yes – AND?

So what do I want to achieve by writing all this (other than hopefully amusing a few of you with the tale)? Well, I think it’s important that it’s documented how difficult giving good and constructive feedback can be. Only by taking a look at what’s happening can we get on the right path to working together to make everything we do better and easier.

I’ll finish by just mentioning that EVERYONE I have dealt with when providing feedback, at both SAP and Microsoft, has been AWESOME. Both organisations do understand and value feedback. It’s not a people problem.

OAuth Client Credential Token APIs

So I’ll probably get hugely humiliated by writing this post – but then again, how do we learn without failing…

Today I had the chance to watch DJ Adams run through one of his #HandsOnSAPDev live streams (it starts 3 mins 30 secs in if you want to watch):

It was an interesting watch and makes me long for those days when I could just play with APIs marked beta and hope to get away with not having to do a massive code re-write several months later.

Anyway – during the episode, DJ was using the “standard” OAuth Client Credentials flow and I had to ask myself – “Why is this any more secure than just using basic authentication?”

I tried to put the point to DJ – but there’s only so much one can do in a chat window, hence this post.

I have since tried to do a little research on the topic – and found this rather good Stack Exchange discussion on the topic:

https://softwareengineering.stackexchange.com/a/297997/369892

Both the question and answer made a lot of sense – read if you like – but I’ll work through the points – in my own style – which is to illustrate with bad drawings.

OAuth Client Credential Flow explained with bad drawings

In the OAuth Client Credentials flow – one system (Bob, our client) gives another system (Dave, our authorisation server) his special secret key.

Bob uses his secret key to authenticate himself to the authorisation server. In the example DJ worked through, authentication was in the form of an authorisation header with <clientid>:<clientsecret>. (And a body that contained the user’s username and password – this is useful for APIs that need to pick up a given user’s credentials.) Note that nothing here is encrypted beyond the usual transport encryption. I’ve seen many implementations of this process where the username isn’t actually needed, because the particular client id and secret are associated with a particular system user. (I’ve never seen any other one where the user’s password is needed – noting these APIs are beta!)

Once Dave (our authorisation server) gets our secret, he checks it is still valid (kinda like checking a password… no wait, exactly like checking a password) and then gives us a limited lifetime token to use the API. In the example DJ worked through, it also checked the username and password – strange, but hey, why not? (Other than it being a TERRIBLE idea that any server should need to store a user’s password as well as client id and secret!)
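
As a rough sketch of that token pick-up (made-up URL and credentials; the exact body parameters were specific to the beta APIs in the stream):

```python
import requests

# Bob asks Dave for a token. The client id/secret travel in the HTTP Basic
# Authorization header; transport encryption (HTTPS) is the only protection.
token_resp = requests.post(
    "https://accountblah.authentication.region.hana.ondemand.com/oauth/token",
    auth=("<clientid>", "<clientsecret>"),
    data={
        "grant_type": "client_credentials",
        # DJ's example additionally sent a username and password in the
        # body; a scoped request would also send "scope": "...", but as
        # noted below, hardly anyone bothers.
    },
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]
```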

Now, according to the OAuth standards, Bob could have asked for the token he picked up to be scoped to only allow certain access. But because Bob is a little bit lazy and Dave doesn’t insist that he asks for a scope, Bob never does. If you go to the oauth.com website and check out the client credentials flow (https://www.oauth.com/oauth2-servers/access-tokens/client-credentials/) you’ll even see they mention that hardly anyone ever uses scope in the flow.

Your service can support different scopes for the client credentials grant. In practice, not many services actually support this.

OAuth.com

Plus Bob likes reducing his interactions with Dave to make things faster, so one token to rule them all is far easier and more generic. Bob might be a programmer (if he weren’t a system… stick with the metaphors people!)

Bob now has his limited lifetime access token he can use to authorise the API interactions. So he goes to make a call to the API server.

Imagine Bob’s surprise when, talking to the API server, it looks very very much like Dave. But it’s not Dave, it’s Fred the API server. In DJ’s example the authentication server was accountblah.authentication.region.hana.ondemand.com and the API server was accounts-service.cfapps.region.hana.ondemand.com. Slightly different names – and they actually resolve to different IP addresses too! But if I look at the SuccessFactors implementation of this similar token logic, both sit on the same server (from an external view – who knows or cares what happens internally). Anyway – Bob uses his token to request some data from Fred.

Fred then goes away and checks that the token is valid. When the token is sent over to Fred, it’s not encrypted in any way or signed with a special key or anything. In DJ’s example it was just in the Authorisation header as “Bearer <access token>”. The security of this exchange relies on the transport encryption – just like the original request to get the access token.
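
Continuing the sketch, the call to Fred is nothing more than the token in a header (path is illustrative, and again HTTPS is doing all the protecting):

```python
import requests

access_token = "<token from the previous sketch>"

# Bob calls Fred with the bearer token; nothing is signed or encrypted
# beyond the transport layer.
api_resp = requests.get(
    "https://accounts-service.cfapps.region.hana.ondemand.com/sub-accounts",
    headers={"Authorization": f"Bearer {access_token}"},
)
api_resp.raise_for_status()
print(api_resp.json())
```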

Fred may well be wondering if Bob is ever going to send him a request he doesn’t have scope for, might need to have a chat to Dave about that… But he validates that Bob has a token that is still valid and that is valid for the requested action (get list of sub-accounts for example.)

So what makes this more secure?

In the exchange I just documented, I cannot see how taking the extra step to pick up an access token to call Fred has made the exchange between Bob and Fred more secure… The only things I can think of are:

  • Conversations to Dave (the authentication server) are treated more seriously, we take extra special care to not record them or allow anyone to snoop on them because the client secret is long lived (like a password).
  • Possibly means that if we take less care that conversations with Fred leak then the impact will be short lived due to the token expiring sooner.
  • yep – that’s about it.
  • can’t think of anything else.

Frankly – for the increase in hassle, I’m not seeing an ROI on securing my API calls this way. Especially as, for many implementations of this sort of logic, the token API runs on the same server as the main application API. (Dave == Fred)

What makes this useful then?

This is a different thing – and goes to identity rather than authorisation. With the client credential approach I can configure my calls to the API server to be treated as if one of the system users is making the call, and not a generic API user. I have one password that I use to get access tokens that allow me to “pretend” to be any user I want to be for the purpose of fetching/updating data via the API. This is something that I would use all the time in SuccessFactors* – it lets me query data using the user’s permissions. Very useful! SAPCP is set up to do this. I believe this is how, for example, applications running on SAPCP can use OAuth Bearer destinations to access API calls as per the logged in user – even though the user is not logged in to that remote application. We can’t do lots of client side SSO, because browsers have gotten wise to applications doing SSO to remote systems inside frames (SSO to a remote server generally requires JS running on different domains and falls foul of Single Origin restrictions). So solutions like SAP Cloud Portal and now SAP WorkZone use the SAPCP destination service to call OAuth Client Credential flows to get access tokens as per the person that is logged into their solution. Obviously this requires trust to be set up – which is the client id and secret.

So Claire (an actual person, not a system for once) comes along and asks Bob (the system) for some data that’s on the system that is Fred (are you lost yet?). Bob asks Dave for an access token that will allow him to ask Fred for stuff on behalf of Claire. Dave, being ever so obliging and having verified the client id and secret, then gives Bob a temp pass to pretend to be Claire when speaking to Fred.

Okay if you’re following that, you’re doing well. But really – that’s why we use these credentials. It has nothing to do with security and everything to do with making life easier for system to system communication when pretending to be other people.
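
For a feel of what that “call the API as Claire” exchange looks like in SuccessFactors terms, here’s a hedged sketch. The SuccessFactors flavour uses a SAML bearer assertion as the grant rather than a plain client-credentials call; the URL is made up and the parameter names are from memory, so verify against the current docs before relying on any of this:

```python
import requests

SF = "https://apisalesdemo.successfactors.example"

# Bob (the extension) asks Dave (the token service) for a token that lets
# him act as Claire when calling Fred (the API). The trust is the client
# id/secret pair plus a signed SAML assertion naming Claire as the subject.
token_resp = requests.post(
    f"{SF}/oauth/token",
    data={
        "client_id": "<api key>",
        "company_id": "<company id>",
        "grant_type": "urn:ietf:params:oauth:grant-type:saml2-bearer",
        "assertion": "<base64-encoded signed SAML assertion for Claire>",
    },
)
token_resp.raise_for_status()
claire_token = token_resp.json()["access_token"]

# Fred now answers as if Claire herself made the call - her permissions,
# her data visibility.
me = requests.get(
    f"{SF}/odata/v2/User",
    params={"$filter": "userId eq 'claire'", "$format": "json"},
    headers={"Authorization": f"Bearer {claire_token}"},
)
print(me.json())
```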

* Now to explain that asterisk from a bit up the page. I would possibly use this logic in SuccessFactors, but I don’t, because it requires that the user that “Bob” is pretending to be has API access, or Fred refuses the call. Giving all users API access is not a good idea at the moment in SuccessFactors, because of the way that certain fields tend not to be hidden or controlled in API access compared to front end access.

Summary

So, to summarise, I cannot see any real security benefit to using OAuth Client Credential flows over Basic Auth, unless you are looking to distribute your development spend on making one area of your codebase more secure than others. Even then it’s not that much better. If you’re able to intercept and abuse a basic authentication flow, you’d be just as likely to be able to intercept and abuse an OAuth Client Credential flow. Indeed, because organisations tend to use the Client Credential flow as per the example DJ had (with the credential applying to a given user) or like SuccessFactors does, it actually opens up a whole new security issue… It’s not just one “user” that might have their credentials breached, it’s anyone that the system is allowed to impersonate.

Okay – go at me – I must have missed something, else we wouldn’t be using OAuth Client Credentials for the sorts of API calls that DJ was making in the video.

I note there are many other OAuth flows, and some of them are much more secure – they use public/private key encryption to ensure that messages are signed and headers can never leak credentials the way Basic Authentication can. But the client credential flow – hmmm, this one, in the use case where it’s a single user, not “impersonating” anyone – has no benefit over Basic Auth, and adds another communication round trip to have to deal with.

How to break a shared authentication solution

Okay, even with the number of diagrams in my last post, there was still some confusion. So, I’m going to try to make it totally clear. Emphasis on “try”.

Regression Testing isn’t just about regression testing one part of a solution

Firstly, I got some feedback from SAP Support when I tried to raise this issue with them. They didn’t seem to understand my concern that I tested the whole recruit to hire to fire employee life cycle every time that SAP released new functionality for SuccessFactors in the preview environment before it got released to the productive environment. This was because:

Since IAS productive and test instances are of the same version, there is from IdP perspective no difference at all.

As already mentioned above: IAS test- and productive instances are of the same release. There is no such thing as a preview environment for IAS.
Thus again one single IAS tenant will be fully sufficient to handle the described scenarios from IdP perspective

I’m not sure how I can make this clearer. But, yes, I do want to test that during the preview release update period the changes that SuccessFactors make do not impact the provisioning and login processes that I have configured in IPS/IAS. I also want to ensure that these are working in the non-prod environment, so that if I push up a change to fix a release impact, it doesn’t break stuff. I’m confused how the Identity services teams seem to think their solution works in isolation! Without the systems that use their services and provide the user details, the solutions aren’t worth anything!

How does this all look if we try to connect it up?

Okay – here’s a relatively complex diagram. It shows how you’d have to wire up the provisioning and authentication trust when using more than one SuccessFactors system with a single IAS.

But there is the slim possibility it could work!

And it could work provided that you:

  • Did some configuration in the IPS and IAS that SuccessFactors has not documented for customers at all
    • that populated userid into different custom attributes of the IAS user record per system
    • then used that system specific field in the assertion sent to the different systems.
  • AND – never maintained different employee attributes for the same email address in the two systems (if you don’t want the system to get hella confused.)

Unfortunately that’s about as likely as me winning the Powerball lottery. Whilst the first point is terribly technical and pedantic (and therefore will be loved by half the people I know and hated by the other half), the second point pretty much means this is never going to happen. The reason for the second restriction is that you cannot have the same email address against multiple “user” records in IAS. Whilst through some technical wizardry I might get the same record to point to two different system ids, whether it is active or inactive is a non-system-specific piece of user data. What name that user has is a non-system-specific piece of user data.

So assume I do have the same email address assigned to an employee in both systems A and B. Terminating that employee in system A will cause a delta in the employee record that will get picked up by the IPS and deactivate the user in the IAS. Even though they should still be able to log in to system B, they won’t be able to, as they are now “inactive”.

It is possible that customers will just decide that they will ensure consistency between user records in different systems – use the same name, have both active, have both inactive – but I very much doubt it.

Likewise not using the same email address for employees in both systems is going to be hard (not to mention hard to track if it mistakenly happens).

Would be nice, but unlikely

In my previous post, I assumed that the above scenario (especially as it involves undocumented configuration) would not be customers’ default. There I assumed that customers would use the default configuration that is deployed when a customer implements the SuccessFactors IAS “upgrade”. That then allows for all sorts of mischief!

How to abuse authentication when you control the data it is based upon.

Let’s have some fun. Assume that the default IPS configuration is being used and the employee records from systems A and B are both trying to update the IAS user master record. Assume system B contains sensitive payroll data (for example, it is a copy of the productive system). Only Annie Admin has the roles in system B to see this data. I, Chris Creative, have access to system A where I’m doing some project work. I have a role where I can hire and fire employees (tends to happen in HR systems!)

  • I firstly terminate an employee with Annie Admin’s email address in system A. If there isn’t one, I’ll just hire her then term her.
  • This will trigger the IPS to update the IAS with the user with Annie’s email address as inactive so Annie can’t log into either system and stop my nefarious fun.
  • Then I hire a new employee that has the same personnel number as Annie had in system B and put a newly generated email address (that I have access to) against the employee’s record
  • I get emailed by the IAS that I have a new user record set up 🙂 what a cool system! I set/reset the password.
  • Now I can use this email address to access system B and it thinks I’m logging in as Annie! Which is awesome, as when they try to trace who downloaded all the payroll details from the system, the audit logs are going to clearly point to Annie’s user! (Okay, yes, with a bit of further checking someone might see that it was my IP address, but I can hide that using a VPN; the audit logging in IAS is hard to get to (API access only), so it’s going to be hard for them to find the random gmail address I made up and trace it.)
  • I access a bunch of data I’m not supposed to
  • I go back to system A, change the email address of the fake Annie back to her email.
  • She gets reactivated
  • I just got away with accessing a system that I had no rights to access because I had rights to another system that was provisioning the same authentication system.

Are you worried yet?

Can you see now why I’m a little worried about this setup? This is why I also assumed that, to prevent this sort of thing happening, most organisations, if forced down the path of using just one IAS, will choose the most secure SuccessFactors system to provision user data to the IAS. But that, as I wrote about in the previous post, will cause a bunch of other problems.

Simple solution

In case it’s not obvious enough, there is a simple solution: provide one IAS instance per SuccessFactors instance that a customer has.

Please sir, can I have one more?

So, firstly, at this moment in time it feels a little petty writing about this. The world is just starting to realise how huge a problem COVID-19 is. So I wish all of you out there reading this health, and the wisdom to look after yourselves and others. And now, back to me being needy.

tldr;

SAP SuccessFactors are only offering customers one non-prod IAS system. One non-prod IAS is not enough. Read more to understand what that is and why I think this is a mistake.

A new service to support new functionality

Recently SAP SuccessFactors announced that they are going to move all customers to a new platform authentication solution which uses SAP Cloud Identity Authentication*. This new background service is a requirement to use the new People Analytics (which uses SAP Analytics Cloud) and is also required to use the new internal facing Career Site Builder functionality. In short, to take advantage of some of the newest and coolest looking features of SuccessFactors, you’re going to need to implement this!

*There’s a handy overview video (strangely enough not behind the SuccessFactors community wall) that explains in detail what SAP is doing; it’s a little more in-depth than this post, and it’s not bad, although very detailed.

Why are SAP SuccessFactors doing this?

Honestly, this is a good move. Authentication is a common problem in all cloud based solutions, so why not have a single service that can be leveraged by all SAP solutions to solve this? Customers get the benefit of multiple development teams within SAP all pooling to produce a better product, SAP gets the benefit of reducing cost of having to maintain and upgrade their authentication solutions for each cloud product. Win – Win!

Balance of cost to SAP vs benefit to customer – for once, both sides of the scale seem to want to go up! Not often the balance seems to favour both sides!

It’s not yet a requirement for customers to migrate to this solution (and yes, for many customers this will require some work to implement), but in order to take advantage of several new SAP SuccessFactors functionalities, customers are going to have to move. So I’m pretty sure that SAP will get close to its goal of migrating everyone by the end of the year (although possibly with a COVID-related bump in that progress).

“Our goal is to have customers migrated to SAP Cloud Identity Authentication (IAS and IPS) by the end of 2020. This is not a “forced” migration, we are just encouraging customers to migrate.  At some point all customers will need to be using this authentication method, that date is not yet determined. “

from SAP SuccessFactors Community, Platform Resources Blog

However, in going through the process to migrate some of my own systems, I’ve discovered a bit of a problem. I’ve tried to raise this multiple times, but so far, I just don’t seem to be able to clearly articulate why this problem is so important. So, I thought I’d try putting together a simple blog post with LOTS of pictures. Hopefully more pictures means easier to understand and then we can get somewhere!

Background – How does it work?

Okay, before diving into the deep end, it’s probably worth trying to articulate what on earth this SAP Cloud Identity Authentication is, and why I’m so happy that it’s coming.

Simple how to log on process

So, here’s how it works in general.

  1. Happy SuccessFactors user uses their computer to
  2. Access SuccessFactors website, which
  3. Redirects them to their own company’s identity provider (IDP), which asks them to log on (or possibly realises they are already signed in and does single sign-on), which then
  4. Sends them back to SuccessFactors having verified that they are a valid user and they can access their SuccessFactors system, so
  5. The Ecstatically Happy SuccessFactors user is even more happy!

Pretty simple really! There’s some technical stuff like SAML2 assertions that happens – but generally the experience and security are what matter.

Importantly, SAP SuccessFactors aren’t really aiming to change any of that experience; there’s just a small technical difference. In between steps 3 and 4 there’s going to be a little more going on under the hood.

Steps 3 and 4 of our previous diagram are replaced with 4 other steps. Quite simply SAP Cloud Identity Authentication Service (IAS) becomes a middleman to/from SuccessFactors and the Corporate IDP.

  1. SuccessFactors sends unauthenticated requests to IAS.
  2. IAS redirects request to IDP for authentication.
  3. IDP tells IAS that user is identified and correctly authenticated
  4. IAS tells SuccessFactors that user is identified and correctly authenticated.

There are a few benefits here. The SAP Cloud IAS is set up to be maintainable by customers. Whereas setting the details of a corporate IDP into SuccessFactors was/is restricted to SAP and certified partners, using the IAS is something that most customers’ technical teams should be able to manage. (I’ll caveat that with a note that I expect most customers will probably get their implementation partner to guide them through the initial setup. But at least it is something that customers can do, if they want to.) SAP SuccessFactors automagically sets up the link between SAP SuccessFactors and the IAS.

There are many options to be able to configure the IAS to do some nice things, setting up rules for access, making login screens pretty, etc. It’s a good tool.

It is important, however, to understand that the IAS isn’t just a simple pass-through proxy service for authentication; it stores a list of all active users. A user MUST exist in the IAS in order to be able to log on to SuccessFactors.

So, what’s the problem?

Well, so far everything seems hunky dory, no? Well, that’s probably because I’ve just talked about the productive use of the solution (which, to be fair, is what most people care about.) However, in many customers’ non-productive environments (and even some productive ones!) a corporate IDP and single sign on is not used.

Simple password-based access not using a corporate IDP
  1. User uses their browser,
  2. Logs on with username/password (stored in SuccessFactors),
  3. Is happy

This scenario is also updated with IAS in the picture.

Can we just assume that you’re as tired of reading numbered lists as I am of typing them? No? Okay – just to be complete:

  1. User bounces to their laptop…
  2. Uses browser to access SuccessFactors
  3. Which redirects them to IAS which asks them for username and password (as no corporate IDP configured)
  4. IAS tells SuccessFactors that user successfully identified and authenticated.
  5. User has great time using SuccessFactors without SSO or a corporate IDP

There is some great functionality that can be leveraged here too! SAP Cloud IAS can implement multi-factor authentication – pretty damn good to have that available without having to use a corporate IDP!

Sidebar – there is an IPS too

Yes, sorry, more terminology! The way the IAS works, it needs to have a list of users provisioned into it. It doesn’t just use the list of employees that are in SuccessFactors; these need to be populated into the IAS.

  1. The IPS reads SuccessFactors regularly to find any changes to employee/user records
  2. It passes any details it finds to the IAS, which updates its list of active users for SuccessFactors (and potentially any other system that authenticates against it) – see the sketch below
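
Conceptually (and only conceptually; the real IPS is configured rather than coded), each sync run does something like this sketch – endpoints, field names and the delta filter are all illustrative assumptions:

```python
import requests

SF = "https://apisalesdemo.successfactors.example/odata/v2"
IAS = "https://mytenant.accounts.ondemand.example/service/scim/Users"  # illustrative
SF_AUTH = ("apiuser@COMPANYID", "password")
IAS_AUTH = ("ips-technical-user", "password")

# 1. Delta read: pick up users changed since the last run (the filter field
#    and timestamp handling are assumptions for the sketch).
changed = requests.get(
    f"{SF}/User",
    params={
        "$select": "userId,username,email,status",
        "$filter": "lastModifiedDateTime gt datetimeoffset'2021-06-01T00:00:00Z'",
        "$format": "json",
    },
    auth=SF_AUTH,
).json()["d"]["results"]

# 2. Push each change into the IAS user store via its SCIM-style user API.
for user in changed:
    requests.post(
        IAS,
        json={
            "userName": user["username"],
            "emails": [{"value": user["email"]}],
            "active": user["status"] == "t",  # assumption about the flag format
        },
        auth=IAS_AUTH,
    )
```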

Now that might sound like a retrograde step – it means that we need to provision (yep, IPS stands for Identity Provisioning Service) users from SuccessFactors into IAS on a very regular basis. But, realistically, there are very few productive use scenarios that require a new employee to have access to the SuccessFactors system instantaneously; a delay of an hour is normally manageable. And there are ways to make that sync happen faster if needed.

Again, this all seems to be good news! There are a few minor niggles. For instance, the IAS must keep a list of usernames and email addresses in order to function. One additional restriction is that all users must have unique email addresses (you can’t set all the non-used users’ emails to dummy@dummy.com!) Again, this doesn’t seem to be much of a problem. Until one considers that many customers have more than one non-productive system.

There can be only one!

I was so tempted to put a Highlander meme in there… But I didn’t because it might distract from the seriousness of this next bit – which seems to be the official SAP position at the moment.

” SAP is offering free [IAS] licenses to SuccessFactors customers for the purpose of logging-in to SF; they will receive one production and one test instance (by region).  “

SAP SuccessFactors IAS FAQ, highlighting and info in [brackets] added by me

SAP will only provide customers with one IAS tenant for their productive environment and one for their non-productive environments.

Why is this a concern?

This seems to be the bit that I’m having trouble explaining to people, so I hope the following diagrams and explanations help.

Many SuccessFactors Employee Central (Core HR) customers have been provisioned by default with three instances. Customers can use them however they like, but I tend to advise people to do the below:

landscape of preview, non-prod and productive SF instances

Note that these systems are very often linked to payroll environments that reflect similar levels of detail to the productive environments. As such, the levels of control of access are very important. This is personal and private information and needs care and attention!

However, very often one of the systems will have anonymised data, or even just made up data, and be used for prototyping and building solutions. Generally in this system users may be given access to areas that they don’t normally have access to. Training and/or testing may take place. The key thing is that users may be running scenarios with different access levels than normal.

It’s important to note that whilst there is clearly an overlap in user records between all three systems, there are plenty of users in both non-productive environments that don’t overlap in normal usage of these systems.

Again, so why is this concerning?

Well, to reiterate, SAP have said they will only provide, as part of customers’ existing subscriptions, two IAS instances: one for productive use and one for non-productive/test environments.

The non-productive IAS must provide authentication services for both non-prod environments. However, the feed that populates the list of users into the IAS can realistically only come from one of the two environments.

I’d better make QAS my source of user data then

Due to security concerns for data access, I must then make the source of user data for the IAS the QAS system, where I have locked down user access. Otherwise, if the source of the user data was the test system, then a test user could update a user in the test system records to have details that could allow themselves to log on to the QAS environment and see data that they were not supposed to see. That can’t be allowed!

Real email addresses at a premium for testing, double handling.

It will be quite some hassle to maintain all the Test system users in the QAS system, but it could be done. However, it may be very difficult to test scenarios that rely on real email addresses. Because the email address must be unique in the IAS, it won’t be possible to swap emails around easily. This becomes especially problematic when email address is used to authenticate users to downstream systems (particular cases where I have seen this occur are SSO into SAP ERP systems, where email address is used as the unique identifier.) Changing an email address to allow testing of a process may mean creating a new employee in the QAS system and reassigning email addresses there in addition to updating the test system.

But hold on, what about regression testing?

However, during the 6-monthly release cycle, there is a need to test all the new SAP SuccessFactors functionality. This needs to be done in the preview environment. So perhaps during this period I lock down all access to the preview environment and make user access as per the QAS system? Then I switch my IAS source to be the preview environment? This has some problems associated with it too. It really restricts how I can carry out my regression testing and who I can use to do it. But as sure as a very sure thing, I will want to test out user provisioning, and I definitely want to check that the configuration I’m using to populate the user list into my IAS from SuccessFactors hasn’t been impacted by the half-yearly release.

So where does this leave us?

Very soon, with the release of SuccessFactors embedded analytics (People Analytics), more and more customers are going to want/need to implement SAP Cloud IAS within their SuccessFactors environments. I would imagine that, due to the overheads I’ve described, many customers would opt to purchase an additional subscription for another non-productive IAS instance. If I think of the cost/overhead of:

  • maintaining all non-productive users in pre-prod (imagine having to hire someone in the pre-prod environment just so you could test a hire in the test environment!)
  • Separating out test system users/data from the pre-prod environment to enable regression testing of payroll changes.
  • Migrating IAS and IPS (tool used to synchronise user record to IAS) configuration from one system to another to enable release testing, then flipping back if urgent production support testing needs to occur.
  • Restricting release regression testing to only those users that have pre-production access and only to their regular roles.

Then I think I can see the cost benefit of purchasing an additional IAS system to handle that. But I really don’t want to be forced into paying an additional subscription to handle a scenario that works just fine right now.

It seems that the balance has shifted, and not in a good way for customers.

Given that when I raised this point on the SuccessFactors community site I was told that:

…By design, you don’t need a 1 to 1 relationship between SF instances/tenants and IAS.  

IAS is designed like any other Identity Management product to handle logins for many different systems. Customers don’t buy a new copy of Ping Identity or Microsoft Azure for every application they use it for.  Your IAS configuration will control which SF instance users log into. 

Additionally, having the 1:many approach makes it a lot easier to manager [sic]. If you had multiple copies of IAS/IPS you would need to figure what is connect to what every time you want to manage anything. When you re-use the same IAS for many applications, you only have to configure one time and then re-use them.  ie…Password policy settings, Corporate Identity Provider connections etc. The corporate IDP is a huge one since you have to work with your internal SSO folks any time you change anything there. 

I think there is a disconnect between how some SuccessFactors product managers think customers are using their product and how it is actually being used. Many customers do not use SSO in their non-productive environments, by design!

Next steps

I really hope that by spending the time to put this post together I can raise some awareness of this problem before it becomes a bigger issue for customers. Ideally, I’d love SAP SuccessFactors to re-evaluate their stance on providing only one IAS instance for all non-productive SuccessFactors instances. It should clearly be (in my not so humble opinion) one IAS per SuccessFactors instance. (on a technical note, happy for just one non-prod IPS). Let’s see. If you’re reading this and you have some influence with SAP SuccessFactors it would be great if you let me know what you think and perhaps let others know too.

Last thought on solution parity

Finally let me leave you with one last diagram/thought…

I love working with the SAP SuccessFactors software and I think that the IAS is some great functionality. However, no new functionality that replaces existing functionality should ever require a customer to purchase an additional subscription to retain parity with their existing solution.

P.S. There's a follow-up post to this one to clarify a few things – How to break a shared authentication solution – where I put my evil hacker hat on and give an example of why this could get very ugly for an organisation.

On ABAP in the Cloud


Michael Koch kicked it all off with a tweet,

to which of course I had to reply:

then I was prodded:

and prodded:

and then James beat me to the blog:

and if you haven’t read James’ post, please do, it is excellent.

So whilst I'm waiting to hear how much it's going to cost to fix my car, whose engine decided to stop working on the way to work today, I thought that rather than drinking a bottle of Pinot Gris and attempting to forget about the shitty waste of a day I've had, I'd do something useful and productive (this post) and drink beetroot, apple, ginger and celery juice instead.

So here are thoughts upon which I will rant.

  • ABAP is a proprietary language, which makes its code costly to support.
  • Building for cloud is far more than just supporting cloud systems.
  • If you love ABAP to the exclusion of everything else, that's your bed, you lie in it. I like beetroot juice; I am so going to have pink pee later.
  • Java is the boring enterprise language of choice.
  • A PaaS really should be language agnostic; if not, it's a pretty crappy PaaS.
  • Why on earth have we ended up here? Who is paying for this?
  • Evolve or die.

These are all going to get intermixed in this rant, but I will still try to address them one by one.

Firstly, on the joys of ABAPers. I have discussed and even written about this before, and it may just be the particular markets where I play, but it's damn hard to find a good and excited ABAPer. People don't learn the language unless they want to work on SAP products. Imagine how quickly that strips out the fun people. But where people have got good ABAP skills, they tend to have far more than that: great business process understanding too. (Robbo has recently written about this: https://blogs.sap.com/2017/10/02/abap-in-sap-cloud-platform-why/ – have a read, especially if you fall into the ABAP diehard camp; it will make you feel much happier than this blog post will.)

But because the good ABAP folk have such great depth of business process understanding, they command a reasonable rate – and why not? Having a BA and a coder in one is a bit of a win, is it not? So they are expensive. One hopes it's because they deliver better, but I find this is not true. They just cost more. But you have to have them to support the huge monolith that is your SAP ERP system. So embedded in companies around the world are these folk who can code ABAP and understand their systems, and who are, if not well paid, at least expensive to have hanging around.

And you won't find someone off the street who has just learnt ABAP and is immediately useful, because the skill in ABAP isn't in the language; it's in understanding the existing library of standard code and frameworks that you can use to get things done.

FFS the language still doesn’t have the concept of a Boolean!
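
To be fair, there are well-worn conventions that paper over the gap. A minimal sketch using the standard abap_bool type, which under the hood is just a CHAR1 holding 'X' or '':

```abap
" The closest thing to a boolean: abap_bool, a CHAR1 holding 'X' or ''
DATA(lv_after_five) = abap_false.

" xsdbool( ) at least derives an abap_bool from a logical expression
lv_after_five = xsdbool( sy-uzeit > '170000' ).

" And there is no truthiness: "IF lv_after_five." will not compile,
" you always have to compare against the constants
IF lv_after_five = abap_true.
  WRITE / 'after five, go home'.
ENDIF.
```

It works, but it is a convention, not a type system guarantee – nothing stops someone storing 'Y' in there.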

The requirement for ABAP support is one of the reasons that SAP costs a decent amount to run. In the future, as we move to S/4HANA public cloud (and we will, slowly but inevitably), cost saving will be essential. ABAP costs, so take it out of the equation. Out-source your custom development; better yet, purchase it as SaaS from someone else. Are you a custom software development house? No? Then why do you try to build your own software? Concentrate on dishwasher powder, chocolate bars, beer or whatever it is you have as your core business.

If we start building cloud extensions in ABAP we are locking down the list of people who could support them. This will cost us extra. Having worked with SaaS for the last few years, I can clearly state that cost of delivery is far more important now than it ever was on-prem. The expectations of customers are different. They will not pay the same amount to build an extension as they paid for the SaaS solution it enhances. ABAP ain't cheap, and neither are ABAPers.

I don't think ABAP and its whole lifecycle management are really well designed for building cloud apps. James made some great points in his blog around dependency management, and how ABAP doesn't support non-linear and project-based development (hopefully ABAPGit will help here; the official voice of support from SAP is very encouraging). But having spent the last 5 years building cloud apps that integrate with SAP systems, I have been hugely impressed by the amount of standard tooling and functionality that is available for projects outside of SAP. Have you used Maven? It's fricking awesome! Even thinking about managing the huge number of libraries I use in most of my builds without that tooling is unthinkable. Since James was probably more detailed and eloquent on this point I will stop there. But really, even if SAP supports ABAPGit, there is a hell of a long way to go before ABAP would even appear in an imaginary magic quadrant chart of cloud development languages, let alone feature anywhere but bottom left.

#ABAPisntDead. No, of course it isn't. There will be legacy on-prem apps that keep running, and people will make businesses out of them, like those Rimini Street folk. But if you can't see anything out there other than ABAP, my goodness you are short-sighted. Any good programmer out there should be able to code in JS (server side or browser), and should have a grasp of at least two other languages. If you can only deal with one, you're not a programmer, you're a liability for the people you work with. Having multiple skills is important, and it's also important to know when to use them. Enlighten yourselves people, there is a whole world full of cool shite out there, go and have a look. If my post infuriates you because you believe that ABAP is the best thing ever, awesome – both for you, because you have passion (go and use it!), and for me, because it means I actually got some people who don't agree with me to read this.

Java is boring, and safe, and commodity. And that is exactly what businesses love. You want something that is reliable, has been proven, and does the job. Moreover, you want bucketloads of libraries that other people have built and tested that do the things you want to do. I once built an implementation of TFA in ABAP that was compatible with Google's Authenticator app; it was a pain in the arse, it hasn't been updated since I wrote it, and I then worried about releasing it as open source because you weren't allowed to do that with ABAP. There's a standard lib for that in Java. Standard boring languages are the bedrock of good enterprise builds. I do like to play with server-side JS (aka Node), but I'm still a sucker for strongly typed languages.
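
To give a feel for the pain, here's a rough sketch of the core TOTP calculation (RFC 6238 / RFC 4226) in modern-ish ABAP. The iv_key and iv_epoch_seconds inputs are assumed to be supplied from elsewhere, and base32 secret decoding, drift windows and error handling are all omitted – whereas in Java the whole thing is one well-tested library dependency away:

```abap
" Assumed inputs: iv_key (shared secret, xstring), iv_epoch_seconds (Unix time, i)
CONSTANTS lc_mask TYPE x LENGTH 1 VALUE '7F'.

DATA lv_counter TYPE x LENGTH 8.
DATA lv_hmac    TYPE xstring.
DATA lv_last    TYPE x LENGTH 1.
DATA lv_offset  TYPE i.
DATA lv_bin     TYPE x LENGTH 4.
DATA lv_code    TYPE i.

lv_counter = iv_epoch_seconds DIV 30.        " 30-second time step as big-endian bytes

" HMAC-SHA1 of the counter with the shared secret
cl_abap_hmac=>calculate_hmac_for_raw(
  EXPORTING if_algorithm   = 'SHA1'
            if_key         = iv_key
            if_data        = CONV xstring( lv_counter )
  IMPORTING ef_hmacxstring = lv_hmac ).

" Dynamic truncation (RFC 4226): low nibble of the last byte picks the window
lv_last   = lv_hmac+19(1).
lv_offset = lv_last MOD 16.
lv_bin    = lv_hmac+lv_offset(4).
lv_bin(1) = lv_bin(1) BIT-AND lc_mask.       " clear the sign bit
lv_code   = lv_bin MOD 1000000.              " the 6 digits you type into the app
```

Getting every one of those byte-fiddling lines right (and tested) took far longer than adding one dependency and one method call would in Java.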

But if you don't like Java, then awesome, choose something else. Indeed, it should not matter what you choose, because any PaaS you build on should be language agnostic when it comes to providing services for you to consume. If you're not consuming any services from your PaaS then you missed the memo about cloud development; please go back to your application server. A PaaS offers micro-services that should be consumable by any application running on that platform. This inherently makes those services consumable in a fashion that is awkward for ABAP and pretty standard for every other language. I'm sure SAP could wrap their services in a layer that would be easier to consume from cloud-based ABAP, but that means losing one of the best bits of a PaaS: that it shouldn't favour any runtime. We'll see how this story plays out…

Which kinda segues into my next worry/rant/observation. How did we end up with a language that really isn't suited to cloud extension becoming an officially supported runtime in SAP's CF PaaS? This goes back to my original tweet.

I believe it is clearly SAP's strategy to move towards the largest part of their revenue coming from public cloud based SaaS solutions (including ERP). BTW, I think this is a sound strategic vision, because if they don't pivot to get there, someone else will take that space. The on-prem model will not make as much money in the future; today's small companies are tomorrow's giants, and with SaaS solutions they don't need to migrate or upscale, they will keep the solution they buy today. SAP needs to be in that space, and they need the credibility that comes from large customers being there too.

To this end, I envisage SAP has been discussing moving some very influential customers to the public cloud. Those customers, I would guess, have responded that they don't want to lose their current people or their investments in custom builds.

The obvious solution from SAP is to put together an ABAP cloud runtime. It cannot be cheap to do this, though. The effort to turn ABAP into a secure and lightweight containerisable solution is not something a team will knock out in a week or two; there must be some sound and solid business reasons behind it. For all the reasons I have previously mentioned, I believe that if companies want to extend SAP SaaS solutions, they should think about using other languages, not ABAP. But I fear this is not about making a better solution; it is about making a marketable one. If customers believe they can extend the value of their existing investments and also benefit from moving to a SaaS based solution, that is a great sales pitch. It's having your cake and eating it.

This vision (even if it doesn't work out to be the reality) of a simple gateway to moving to SaaS ERP is what I believe we are now being sold. This isn't a story for developers; this is a story for the high-level execs who sign the S/4HANA subscriptions.

I hope that a cloud-based ABAP will be the gateway that enables some organisations to get off the on-premise model and head to the cloud. What I fully expect is that once they are there, they will realise there are better and more supportable ways to extend. That would be great. In the meantime, I fear we will start bringing non-cloudy ways of working into the cloud landscape, which will likely cause failed or cost-overrun projects. We run the risk of Cloud ABAP becoming the preferred way to interact with S/4HANA cloud; that would be disastrous.

It has been suggested that Cloud ABAP will potentially be the solution that encourages adoption of the SAP Cloud Platform. I just hope it isn't the solution that kills it. I would much rather the money being spent on putting ABAP into the cloud were used to handle some of the other issues I see with SAP CP, but clearly there is a view that it will bring a return on investment.

Then again, if you’re not trying new stuff and making mistakes, you’re not learning. If you’re not learning, you’re falling behind. So here’s to making mistakes and learning! To steal the excellent closing lines from James’ post:

So buckle up because there’s no turning back at this point. It’s either evolve or die.

I look forward to a lively debate on this topic.

(James Wood – https://blogs.sap.com/2017/10/04/abap-in-the-cloud-is-this-a-good-thing/)

James, I couldn’t say it better mate. Although I would refer to the platform as SAP CP 😉

I think SAP Cloud Platform is and will be a key part of the story of SAP's and customers' evolution to the cloud. If it takes putting a "runs ABAP" badge on it to get people to see how useful it is, I'll deal with that. But for sure, it would not be my recommendation to any organisation as best practice. I'll keep an open mind; perhaps it will be one day, and if so I'll adapt and evolve – because that's what you should do.

As always, these are my own thoughts, not my company's. Please feel free to jump onto SCN and reply to James' post. I'll probably read those comments as well as whatever gets posted on Twitter.

Further update on SAP Gateway CSRF token farce

So, an update on my recent rant about CSRF protection that isn't needed on SAP Gateway.

The folks in the very attentive HCI team have just added functionality to their solution: if you configure an OData call to an on-prem system via SAP HANA Cloud Connector, it will automatically do the GET to fetch the CSRF token for you whenever you configure a data update operation.

That’s kinda cool, but all it does is sweep the offending rubbish under the rug.

Sweep under the rug – credit Bruce Krasting (https://www.flickr.com/photos/bruce_krasting/7695348682)

So now we have logic built into an integration platform that needlessly slows our integration flows because of a superfluous system requirement. An extra round trip for no reason.

In this case it is truly superfluous, because the original PUT that I was using carried the user credentials in the header. That alone should make the CSRF token unnecessary.

What this does show is how SAP cloud solutions like SAP HCI are able to update and fix stuff far faster than their on-prem counterparts. Even if it is a work-around for a problem that shouldn't exist.

Security in depth – or a bug waiting to happen? – CSRF protection on SAP Gateway

What’s that? – It’s the dragon that guards the locked door, we feed people who ask silly security questions to it.

<rant>

So I’ve got my knickers in a twist again. Recently I was playing around with sending some OData to my SAP server when it refused me. Now, I didn’t like that, but at least it was kind enough to tell me why: apparently I hadn’t fed it a CSRF token. OK, so I looked in the headers of the GET that did work, and lo and behold, there was a CSRF token there. I fed that into the POST I was doing, and bingo, it worked.
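
For anyone who hasn't seen the dance, here's roughly what it looks like when driven from code. This is a minimal sketch using ABAP's cl_http_client; the service URL and payload are made up for illustration, and error handling is omitted:

```abap
DATA lo_client TYPE REF TO if_http_client.

" Step 1: GET with "X-CSRF-Token: Fetch" - the token comes back in the response header
cl_http_client=>create_by_url(
  EXPORTING url    = 'https://myserver/sap/opu/odata/sap/ZDEMO_SRV/Orders'  " made-up URL
  IMPORTING client = lo_client ).
lo_client->request->set_header_field( name = 'x-csrf-token' value = 'Fetch' ).
lo_client->send( ).
lo_client->receive( ).
DATA(lv_token) = lo_client->response->get_header_field( 'x-csrf-token' ).

" Step 2: reuse the same client (same session cookies!) and replay the token on the POST
lo_client->refresh_request( ).
lo_client->request->set_method( if_http_request=>co_request_method_post ).
lo_client->request->set_header_field( name = 'x-csrf-token' value = lv_token ).
lo_client->request->set_header_field( name = 'content-type' value = 'application/json' ).
lo_client->request->set_cdata( '{ "OrderId": "42" }' ).  " made-up payload
lo_client->send( ).
lo_client->receive( ).
```

Note that the token is tied to the session, so the same client (and its cookies) must be used for both calls. That extra round trip is exactly what the rest of this rant is about.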

Now it seems to me that many, many people have hit the same thing and found the same solution. Indeed, I asked around some people I knew and they told me: “Get over it Chris, it’s in the header of your GET, it lasts all session, just use it!” But me being me – no, I wouldn’t accept that!

Slight aside – they also mentioned: “Damnit, I remember when that patch came in, it buggered up my custom Gateway app and I had no warning that it was coming, took me ages to figure out why it wasn’t working.”

So I thought: OK? Why? Why do we have CSRF protection in the first place? What on earth is it?

CSRF (Cross-Site Request Forgery) protection, according to the websites I read, is supposed to protect against the case where, unknown to a user, a cookie in the browser used for authentication allows a malicious site to alter data on your system (and in the case of Gateway, your SAP system).

So to send a PUT, POST or DELETE (the verbs that can change data) from a browser without the user knowing is going to involve one of two things:

a) An injection of HTML on the page that adds either a form which is going to POST some data (the typical type of attack CSRF protection guards against), or a link, e.g. an img tag, which GETs data.

b) An injection of some script, e.g. JS, on the page that is going to do the PUT/POST/DELETE.

In the case of (a – POST), the payload will be malformed and Gateway isn’t going to accept it as valid OData – so no security worries anyway. And for (a – GET), CSRF protection isn’t even applied.

In the case of (b), well, if I can embed JS, I can just as easily embed a GET, pull the token from the header and then do an update with the CSRF token. Indeed, the sites that advocate the CSRF token approach make it clear that it cannot protect you when there is malicious JavaScript on the page.

In the case that the script is running on a page from a different domain, then CORS will kick in and stop the access – but if somehow the injection is on my own domain, I don’t see how we’re protected.

So I was at a loss. What protection does CSRF actually offer Gateway?

So I researched further. There’s a great explanation, which does a better job than I have, in the Play Framework documentation:

It is recommended that you familiarise yourself with CSRF, what the attack vectors are, and what the attack vectors are not. We recommend starting with this information from OWASP.

Simply put, an attacker can coerce a victims browser to make the following types of requests:

  • All GET requests
  • POST requests with bodies of type application/x-www-form-urlencoded, multipart/form-data and text/plain

An attacker can not:

  • Coerce the browser to use other request methods such as PUT and DELETE
  • Coerce the browser to post other content types, such as application/json
  • Coerce the browser to send new cookies, other than those that the server has already set
  • Coerce the browser to set arbitrary headers, other than the normal headers the browser adds to requests

Since GET requests are not meant to be mutative, there is no danger to an application that follows this best practice. So the only requests that need CSRF protection are POST requests with the above mentioned content types.

Since Gateway does not support POST requests with bodies of type application/x-www-form-urlencoded, multipart/form-data or text/plain (or if it does, there’s your problem right there!), there is no need for CSRF protection.

I then had a fun conversation on Twitter with Ethan.

The great thing about chatting with Ethan is you always come out having learnt something.

He makes a good point, and I’ll paraphrase him:

“The best security is deep and many layered and protects not only against the things that you know may happen, but also against those that you’re pretty sure won’t.”

I was wrong – sending a PUT, POST or DELETE (the verbs that can change data) from a browser without the user knowing is going to involve one of not two, but three things. With the third being:

An exploitation of a hitherto unknown browser bug that allows it.

So now I’m confused. Is it worthwhile implementing the hassle that is CSRF protection, including the potential slowdown in response times (a paramount concern in a mobile app), for a situation that merely might happen?

When I’m writing ABAP code, I’m happy to trade away performance of the code for ease of maintenance. I don’t use pointers (field symbols) to loop over data that I do not intend to change, because some fool could come along later and accidentally do just that. If I instead use a work area, there isn’t that risk.
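
To make that trade-off concrete, a small sketch (assuming an internal table of flight records):

```abap
TYPES: BEGIN OF ty_flight,
         carrid TYPE c LENGTH 2,
         price  TYPE p LENGTH 8 DECIMALS 2,
       END OF ty_flight.
DATA lt_flights TYPE STANDARD TABLE OF ty_flight WITH EMPTY KEY.

" Fast but risky: the field symbol points straight at the table line,
" so an assignment inside the loop silently changes the table itself
LOOP AT lt_flights ASSIGNING FIELD-SYMBOL(<ls_flight>).
  " <ls_flight>-price = 0.   " one careless line and the data is corrupted
ENDLOOP.

" Slower but safer: the work area is a copy, so stray writes go nowhere
LOOP AT lt_flights INTO DATA(ls_flight).
  ls_flight-price = 0.       " changes only the local copy, not lt_flights
ENDLOOP.
```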

So in some respects I already do work that makes the solution slower to ensure lower risk, so shouldn’t I just do the CSRF thingy?

However, consider where the risk comes from: I don’t trust that the people maintaining the code after I leave will understand what I have done in my implementation of CSRF protection and won’t make a mistake. Even if I’m using UI5 in my application to update my SAP system, will they remember to call the refreshSecurityToken method every time before a PUT, POST or DELETE? Will they test it? Will they let the session expire in their testing so that they actually need to call the refreshSecurityToken method? I really hope so, but I doubt it. I can see applications going into error and data not being updated when it should have been, all because of “needless” CSRF protection.

Weighing dodgy code vs browser bug risks

So what I see is this: security in enterprise is paramount; Gateway is enterprise software; it needs to be secure. So SAP made it so, even though it hasn’t really made a big difference or fixed any known security holes – “just in case”. However, custom code (and even standard code 😉) will have bugs, and ones that rely on sessions timing out are particularly hard to test and will slip through. The risk to your Gateway-based mobile app from having CSRF protection enabled is greater than the risk of your data being maliciously hacked through zero-day exploits. But I guess it depends on what that data is 🙂.

</rant>

OK, one final bit…

<rant>

Given that I might not actually be using my Gateway for a UI app but for machine-to-machine transactions, would it PLEASE be possible that if I provide a valid authentication header on the PUT/POST/DELETE, we ignore the CSRF thingy? If I can somehow come up with a valid auth header, then we aren’t protecting anything with a CSRF token; we’re just making transactions slower by requiring multiple round trips that shouldn’t be needed.

</rant>

I feel better now. 🙂

Read how this discussion unfolds over at SCN…

http://scn.sap.com/community/gateway/blog/2014/08/26/gateway-protection-against-cross-site-request-forgery-attacks#comment-611490

P.S. My last post from the SCN comment thread, as I think it’s an important summary:

The thing is, by not implementing CSRF protection, we aren’t making our services insecure. There are no known ways to use CSRF against Gateway currently.

There is the case of protection against unknown attacks, but is that worth the cost, risk, effort?

Not using CSRF protection does not mean you are making your service insecure. It is just trading “just in case” against real-life complexity, risk and cost.

Depending on the data concerned, that “just in case” might be worth it. It won’t always be.

Architects have a responsibility to their companies to balance these risks and decide. We have the responsibility to inform them clearly and not just pretend that security is the only and overwhelming factor to consider.

Sometimes we put security on a pedestal and insist everything must be done to address it. But we should remember that everything has a risk/reward curve, and sometimes NOT coding for a security risk is actually less risky than coding for it.

Intangibles: appreciating your employees motivates, performance ratings processes don’t

Sorry, here I go again. I just read Steve Hunt’s post: http://www.tlnt.com/2014/08/04/performance-management-we-wont-fix-the-problem-by-ignoring-it/

And of course I’m all worked up. Why? Two reasons.

Firstly, I strongly disagree with the premise that performance management actually achieves improvements for the employees being “managed”. This is using Steve Hunt’s own definition of performance management:

Standardized and defined processes used to communicate job expectations to employees, evaluate employees against those expectations, and utilize these evaluations to guide talent management decisions related to compensation, staffing and development.

This has nothing to do with motivating and improving employees. It’s all about figuring out the smallest amount you can get away with paying your staff.

A process that can actually help employees improve is working with them to find out their interests, find out what they want to do, and shape their work around that. This isn’t the world of Gen-X and Boomers any more. People are far more interested in making work part of their life and life part of their work. Will they do that if there is a regimented process that measures them against a cookie-cutter mould? No, they won’t. Because no two employees are exactly alike, and no employer that wants to get the best out of their employees is going to manage that by trying to shape an employee to the employer’s expectations. We need instead to understand the whole of the employee’s values and use that to motivate them. An employee who is doing what they feel is valuable, and feels that the company supports them in this, is far more likely to perform well than one who does not.

We have the tools (in a creepy big-brother kinda way) to analyse far more than just our employees’ achievement of our stated corporate goals: also their interests, engagements, networks and influences. By better understanding our employees, and then aligning our business goals with their goals, we stand a much better chance of motivating and retaining talent.

Remunerate at the market rate for the skills the employee possesses; if they gain more skills, pay more. Or, if those skills have nothing to do with your business, don’t try to hold on to someone who would be happier elsewhere. Likewise, if the desires of the employee do not align with your corporate goals, don’t attempt to force the employee to comply; you are both better off without each other. Have the frank discussion that their desires and your goals don’t align at all. If their goal is to sit and eat chocolate and drink coffee all day and you don’t have a coffee-and-chocolate-tasting role in your company, then it’s probably not going to work out. But it is good to know this – it’s time to move this employee on. Not because they don’t do what they are supposed to do, but because they have no desire to be doing it. Be frank: you can’t get rid of them if they are doing a reasonable job, but they will never be stellar unless _they_ want to do the work.

Now, I’m sure this approach isn’t going to work in many, if not most, industries. If you have a load of jobs that people will only do if they are paid enough to suffer through them, then this approach will not work. In that case, fall back on Steve’s approach; just realise you’re very unlikely to develop or retain any talent.

However, if you are in an industry where people (or at least some of them) work because they love doing the work and are enthused about being the best, then I think my approach has some real advantages. Of course you will still hire the occasional bad apple. This is where I believe performance management comes in: you attempt to manage that person out of the company while ensuring you are not at legal risk, by following a clear process. I’m sure there are risks in only performance-managing those you’d rather have leave the company, but there are certainly rewards too.

And now to my second point of why I’m unhappy with this article: it was written by someone with the job title Senior Vice President of Customer Value at SuccessFactors/SAP Cloud HCM.

If this is what SuccessFactors believes will drive more customer value, then I’m very worried that innovative and alternative approaches to making talent management work are not likely to get a great reception.

I strongly agree with Steve that we need to find out and measure how well our people are doing, but that does not need to be against a defined set of company goals; it can be against a slightly less well-defined set of individual personal goals that the company can hopefully align with and benefit from. I believe the next step for talent management solutions like SuccessFactors is to help employers analyse who their employees are and what they want, then use that information to align both the business’s needs and the employee’s desires. It’s a huge technical challenge, but we have to start somewhere. By at least acknowledging that there might be better ways of doing things rather than just dismissing them, we’d be making a first step in the right direction.

Companies that start to embrace the holistic view of the employee rather than the company centric one will, I believe, start to reap the rewards.

I could well be just dreaming, but at least I’ll be dreaming with some of the most motivated and enthusiastic people around who are all trying to achieve their goals in my company.