Category Archives: Coding

What is “table stakes” for API functionality?

What is “table stakes” for product development these days? Do you believe that we should be following an “API first” development strategy?

I recently (well it was a year or so ago) came across some cool SAP SuccessFactors functionality which allows for a person to be put temporarily into another position, returning to their substantive position after a fixed period. Many of you will know this as “Higher Duties” or “Temporary Assignment” functionality. The problem I found (more recently) was that whilst most of the information about the temporary assignment was available via API, one key piece of information (very important information), the end date of the assignment, was NOT available via the standard SAP SuccessFactors OData API. ☹

Surprisingly, and luckily, the information is available via the old SFAPI (a SOAP based API, and we all know how much developers love SOAP!) So I can work around this for the moment.

This said, to my mind, new(ish) functionality like HD/TA for SAP SuccessFactors should always make the data that can be seen in the frontend application available via the OData API. That should be “table stakes”. A modern HRIS must be supported by a secure, open, documented and complete API. During sales presentations I often ask potential customers, when comparing SAP SuccessFactors with the other products they are considering, to Google “name of HRIS solution” + “API” and see what is returned – whether they need to sign in to see it, whether there are supporting documents, examples, etc. SAP SuccessFactors pretty much always wins this game (mainly because Microsoft and Google don’t have competing products in the space, because they use SAP SuccessFactors.)

SAP has come a long way in making its products somewhat developer friendly, so it really irks me when I’m told “We do not have this on our roadmap at this point in time, please raise an influence request”. So I did – https://influence.sap.com/sap/ino/#/idea/308298

If you’re an SAP customer and you manage to read this far into this post, please do me a solid and log in to vote for my influence request! Thank you! Whilst you’re there, have a look around – there are loads of requests that would make so much sense to implement; vote for them too.

Customer voice is so important – and this “influence.sap.com” space is one area where we as customers actually get a little bit of a say!

OAuth Client Credential Token APIs

So I’ll probably get hugely humiliated by writing this post – but then again, how do we learn without failing…

Today I had the chance to watch DJ Adams run through one of his #HandsOnSAPDev live streams (it starts 3 mins 30 secs in if you want to watch):

It was an interesting watch and makes me long for those days when I could just play with APIs marked beta and hope to get away with not having to do a massive code re-write several months later.

Anyway – during the episode, DJ was using the “standard” OAuth Client Credentials flow and I had to ask myself – “Why is this any more secure than just using basic authentication?”

I tried to put the point to DJ – but there’s only so much one can do in a chat window, hence this post.

I have since tried to do a little research – and found this rather good Stack Exchange discussion on the topic:

https://softwareengineering.stackexchange.com/a/297997/369892

Both the question and answer made a lot of sense – read if you like – but I’ll work through the points – in my own style – which is to illustrate with bad drawings.

OAuth Client Credential Flow explained with bad drawings

In the OAuth Client Credentials flow – one system (Bob, our client) gives another system (Dave, our authorisation server) his special secret key.

Bob uses his secret key to authenticate himself to the authorisation server. In the example DJ worked through, authentication was in the form of an authorisation header containing <clientid>:<clientsecret> (and a body that contained the user’s username and password – useful for APIs that need to pick up a given user’s credentials). Note that nothing here is encrypted beyond the usual transport encryption. I’ve seen many implementations of this process where the username isn’t actually needed because the particular client id and secret are associated with a particular system user. (I’ve never seen another one where the user’s password is needed – noting these APIs are beta!)

Once Dave (our authorisation server) gets our secret, he checks it is still valid (kinda like checking a password… no wait, exactly like checking a password) and then gives us a limited lifetime token to use the API. In the example DJ worked through it also checked the username and password – strange, but hey why not? (Other than it being a TERRIBLE idea that any server should need to store a user’s password as well as client id and secret!)
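To make that concrete, here’s a minimal sketch (Java, using the JDK’s built-in HttpClient) of what picking up a token looks like. The URL and credentials are placeholders, and I’ve left out the username/password body that DJ’s beta API wanted, so this is just the plain client credentials grant:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class TokenFetch {
	public static void main(String[] args) throws Exception {
		String clientId = "my-client-id";          // placeholder
		String clientSecret = "my-client-secret";  // placeholder
		String tokenUrl = "https://auth.example.com/oauth/token"; // placeholder

		// Basic auth header built from <clientid>:<clientsecret> - used just like a password
		String basic = Base64.getEncoder()
				.encodeToString((clientId + ":" + clientSecret).getBytes());

		HttpRequest tokenRequest = HttpRequest.newBuilder()
				.uri(URI.create(tokenUrl))
				.header("Authorization", "Basic " + basic)
				.header("Content-Type", "application/x-www-form-urlencoded")
				// a scope parameter could be added to the body here, but hardly anyone bothers (see below)
				.POST(HttpRequest.BodyPublishers.ofString("grant_type=client_credentials"))
				.build();

		HttpResponse<String> response = HttpClient.newHttpClient()
				.send(tokenRequest, HttpResponse.BodyHandlers.ofString());

		// The JSON body contains the access_token, token_type and expires_in
		System.out.println(response.body());
	}
}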

Now, according to the OAuth standards, Bob could have asked for the token he picked up to be scoped to only allow certain access. But because Bob is a little bit lazy and Dave doesn’t insist that he asks for scope, Bob never does. If you go to the oauth.com website and check out the client credentials flow (https://www.oauth.com/oauth2-servers/access-tokens/client-credentials/) you’ll even see they mention that hardly anyone ever uses scope in the flow.

Your service can support different scopes for the client credentials grant. In practice, not many services actually support this.

OAuth.com

Plus Bob likes reducing his interactions with Dave to make things faster, so one token to rule them all is far easier and more generic. Bob might be a programmer (if he weren’t a system… stick with the metaphors people!)

Bob now has his limited lifetime access token he can use to authorise the API interactions. So he goes to make a call to the API server.

Imagine Bob’s surprise when, talking to the API server, it looks very very much like Dave. But it’s not Dave, it’s Fred the API server. In DJ’s example the authentication server was accountblah.authentication.region.hana.ondemand.com and the API server was accounts-service.cfapps.region.hana.ondemand.com. Slightly different names – and they actually resolve to different IP addresses too! But if I look at the SuccessFactors implementation of this similar token logic, both sit on the same server (from an external view – who knows or cares what happens internally). Anyway – Bob uses his token to request some data from Fred.

Fred then goes away and checks that the token is valid. When the token is sent over to Fred, it’s not encrypted in any way or signed with a special key or anything. In DJ’s example it was just in the Authorisation header as “Bearer <access token>”. The security of this exchange was relying on the transport encryption – just like the original request to get the access token.
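Again, just to illustrate (the URL is made up and the access token is whatever came back from Dave), the call to Fred is nothing more than a normal request with the token pasted into the Authorization header:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiCall {
	public static void main(String[] args) throws Exception {
		String accessToken = "the-access-token-from-the-token-response"; // placeholder
		String apiUrl = "https://api.example.com/sub-accounts";          // placeholder

		HttpRequest apiRequest = HttpRequest.newBuilder()
				.uri(URI.create(apiUrl))
				// the only "security" on this call: the bearer token, protected in transit by TLS
				.header("Authorization", "Bearer " + accessToken)
				.GET()
				.build();

		HttpResponse<String> response = HttpClient.newHttpClient()
				.send(apiRequest, HttpResponse.BodyHandlers.ofString());
		System.out.println(response.statusCode() + "\n" + response.body());
	}
}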

Fred may well be wondering if Bob is ever going to send him a request he doesn’t have scope for – he might need to have a chat to Dave about that… But he validates that Bob has a token that is still valid and that is valid for the requested action (get the list of sub-accounts, for example.)

So what makes this more secure?

In the exchange I just documented, I cannot see how taking the extra step to pick up an access token to call Fred has made the exchange between Bob and Fred more secure… The only things I can think of are:

  • Conversations to Dave (the authentication server) are treated more seriously, we take extra special care to not record them or allow anyone to snoop on them because the client secret is long lived (like a password).
  • Possibly means that if we take less care that conversations with Fred leak then the impact will be short lived due to the token expiring sooner.
  • yep – that’s about it.
  • can’t think of anything else.

Frankly – for the increase in hassle I’m not seeing an ROI on securing my API calls. Especially as for many implementations of this sort of logic the token API runs on the same server as the main application API. (Dave == Fred)

What makes this useful then?

This is a different thing – and goes to identity rather than authorisation. With the client credential approach I can configure my calls to the API server to be treated as if one of the system users is making the call and not a generic API user. I have one password that I use to get access tokens that allow me to “pretend” to be any user I want to be for the purpose of fetching/updating data via the API. This is something that I would use all the time in SuccessFactors* – it lets me query data using the user’s permissions. Very useful! SAPCP is set up to do this. I believe this is how, for example, applications running on SAPCP can use OAuth Bearer destinations to make API calls as the logged in user – even though the user is not logged in to that remote application. We can’t do lots of client-side SSO, because browsers have gotten wise to applications doing SSO to remote systems inside frames (SSO to a remote server generally requires JS running on different domains and falls foul of Single Origin restrictions). So solutions like SAP Cloud Portal and now SAP WorkZone use the SAPCP destination service to call OAuth Client Credential flows to get access tokens as per the person that is logged into their solution. Obviously this requires trust to be set up – which is the client id and secret.

So Claire (an actual person, not a system for once) comes along and asks Bob (the system) for some data that’s on the system that is Fred (are you lost yet?). Bob asks Dave for an access token that will allow him to ask Fred for stuff on behalf of Claire. Dave, being ever so obliging and having verified the client id and secret, then gives Bob a temp pass to pretend to be Claire when speaking to Fred.

Okay if you’re following that, you’re doing well. But really – that’s why we use these credentials. It has nothing to do with security and everything to do with making life easier for system to system communication when pretending to be other people.

* Now to explain that asterisk a bit up the page. I would possibly use this logic in SuccessFactors, but I don’t, because it requires that the user “Bob” is pretending to be has API access, or Fred refuses the call. Giving all users API access is not a good idea at the moment in SuccessFactors because of the way that certain fields tend not to be hidden or controlled in API access compared to front end access.

Summary

So, to summarise, I cannot see any real security benefit to using OAuth Client Credential flows over Basic Auth unless you are looking to distribute your development spend on making one area of your codebase more secure than others. Even then it’s not that much better. If you’re able to intercept and abuse a basic authentication flow, you’d be just as likely to be able to intercept and abuse an OAuth Client Credential flow. Indeed, because organisations tend to use the Client Credential flow as per the example DJ had (with the credential applying to a given user) or like SuccessFactors does, it actually opens up a whole new security issue… It’s not just one “user” that might have their credentials breached, it’s anyone that the system is allowed to impersonate.

Okay – go at me – I’ve missed something – else we wouldn’t be using OAuth Client Credentials for the sorts of API calls that DJ was making in the video.

I note there are many other OAuth flows and some of them are much more secure – they use PPK encryption to ensure that messages are signed and headers could never leak credentials the way Basic Authentication can. But the client credential flow – hmmm, this one, in the use case where it’s a single user not “impersonating” anyone – there’s no benefit over Basic Auth, and there’s another communication round trip to have to deal with.

On ABAP in the Cloud

ABAP in the cloud

Michael Koch kicked it all off with a tweet,

to which of course I had to reply:

then I was prodded:

and prodded:

and then James beat me to the blog:

and if you haven’t read James’ post, please do, it is excellent.

So whilst I’m waiting to hear how much it’s going to cost me to fix my car, whose engine decided to stop working on the way to work today, I thought that rather than drinking a bottle of Pinot Gris and attempting to forget about the shitty waste of a day I’ve had, I’d do something useful and productive (this post), and drink beetroot, apple, ginger and celery juice instead.

So here are thoughts upon which I will rant.

  • ABAP is a proprietary language which makes its code costly to support.
  • Building for cloud is far more than just supporting cloud systems.
  • If you love ABAP to the exclusion of everything else, that’s your bed, you lie in it. I like beetroot juice, I am so going to have pink pee later.
  • Java is the boring enterprise language of choice.
  • A PaaS really should be language agnostic, if not it’s a pretty crappy PaaS.
  • Why on earth have we ended up here? Who is paying for this?
  • Evolve or die.

These are all going to get intermixed in this rant, but I will still try to address them one by one.

Firstly, on the joys of ABAPers. I have discussed and even written about this, and it may just be the particular markets where I play, but it’s damn hard to find a good and excited ABAPer. People don’t learn the language unless they want to work on SAP products. Imagine how quickly that strips out the fun people. But where people have got good ABAP skills, they tend to have far more than that – great business process understanding too (Robbo has recently written about this: https://blogs.sap.com/2017/10/02/abap-in-sap-cloud-platform-why/). Have a read, especially if you fall into the ABAP diehards camp; it will make you feel much happier than this blog post will.

But because the good ABAP folk have such great depth of business process understanding, they command a reasonable rate – and why not, having a BA and a coder in one is a bit of a win, is it not? So they are expensive. One hopes it’s because they deliver better, but I find this is not true. They just cost more. But you have to have them to support the huge monolith that is your SAP ERP system. So embedded in companies around the world are these folk who can code ABAP, understand their systems and are, if not well paid, expensive to have hanging around.

And you won’t find someone off the street who has just learnt ABAP who is useful, because the skill in ABAP isn’t in the language, it’s in understanding the existing library of standard code and frameworks that you can use to get things done.

FFS the language still doesn’t have the concept of a Boolean!

The requirement for ABAP support is one of the reasons that SAP costs a decent amount to run. In the future, as we move to S/4HANA public cloud (and we will, slowly but inevitably), cost saving will be essential. ABAP costs, so take it out of the equation. Out-source your custom development, or even better, purchase it as SaaS from someone else. Are you a custom software development house? No – then why do you try to build your own software? Concentrate on dishwasher powder, chocolate bars, beer or whatever it is you have as your core.

If we start building cloud extensions in ABAP we are locking down the list of people who could support them. This will cost us extra. Having worked with SaaS for the last few years, I can clearly state that cost of delivery is far more important now than it ever was on-prem. The expectations of customers are different. They will not pay the same amount to build an extension as they paid for the SaaS solution it enhances. ABAP ain’t cheap, and neither are ABAPers.

I don’t think ABAP and its whole lifecycle management is really well designed for building cloud apps. James mentioned some great points in his blog around dependency management, and how ABAP doesn’t support non-linear and project-based development (hopefully ABAPGit will help here – the official voice of support from SAP is very encouraging.) But having spent the last 5 years building cloud apps that integrate with SAP systems, I have been so impressed by the huge amount of standard tooling and functionality that is available for projects outside of SAP. Like, have you used Maven? It’s fricking awesome! Even thinking about managing the huge number of libraries that I use in most of my builds without this tooling would be unthinkable. Since James was probably more detailed and eloquent on this point I will stop there. But really, even if SAP supports ABAPGit, there is a hell of a long way to go to even be put into an imaginary cloud development language magic quadrant chart, let alone feature anywhere but bottom left.

#ABAPisntDead. No, of course it isn’t – there will be legacy on-prem apps that will run and people will make businesses out of it, like those Rimini Street folk. But if you can’t see anything out there other than ABAP, my goodness you are short-sighted. Any good programmer out there should be able to code in JS (server side or browser), and should have a grasp of at least 2 other languages. If you can only deal with one, you’re not a programmer, you’re a liability for the people you work with. Having multiple skills is important, and it’s also important to know when to use them. Enlighten yourselves people, there is a whole world full of cool shite out there, go and have a look. If my post infuriates you because you believe that ABAP is the best thing ever, awesome, both for you and for me: for you because you have passion, go and use it; and for me because it means I actually got some people who don’t agree with me to read this.

Java is boring, and safe, and commodity. And that is exactly what businesses love. You want something that is reliable, has been proven, does the job. Moreover, you want bucket loads of libraries that other people have built and tested that can do the things you want to do. I did build an implementation of TFA in ABAP that was compatible with Google’s Authenticator app, but it was a pain in the arse, it hasn’t been updated since I wrote it, and I then worried about releasing it as open source because you weren’t allowed to do that with ABAP. There’s a standard lib for it in Java. Standard boring languages are the bedrock of good enterprise builds. I do like to play with server-side JS (aka Node), but I’m still a sucker for strongly typed languages.

But if you don’t like Java, then awesome, choose something else. Indeed it should not matter what you choose, because any PaaS you build on should be language agnostic when it comes to providing services for you to consume. If you’re not consuming any services from your PaaS then you missed the memo about cloud development; please go back to your application server. A PaaS offers micro-services that should be able to be consumed by any application running on that platform. This inherently makes those services consumable in a fashion that is hard to use from ABAP and pretty standard for every other language. I’m sure that SAP could wrap their services into a consumable layer that would be easier to use in the cloud-based ABAP. But this then means we start losing one of the best bits of the PaaS: that it shouldn’t favour any runtime. We’ll see how this story plays out…

Which kinda segues into my next worry/rant/observation. How did we get here that a language that really isn’t suited to cloud extension ends up as an officially supported run time in SAP’s CF PaaS? This goes back to my original tweet.

I believe that it is clearly SAP’s strategy to move to the largest part of their revenue coming from public cloud based SaaS solutions (including ERP). Btw, I think this is a sound strategic vision, because if they don’t pivot to get there, someone else will take that space. The on-prem model will not make as much money in the future; today’s small companies are tomorrow’s giants, and with SaaS solutions they don’t need to migrate/upscale – they will keep the solution they buy today. SAP needs to be in that space, and they need the credibility that comes from large customers being there too.

To this end I envisage SAP have been discussing moving some very influential customers to the public cloud. Those customers, I would guess, have responded that they don’t want to lose their current people or custom-build investments.

The obvious solution from SAP is to put together an ABAP cloud runtime. It cannot be cheap to do this though. The effort to make ABAP into a secure and lightweight containerizable solution will not be something that a team will do in a week or two. There must be some sound and solid business reasons to do this. For all the reasons I have previously mentioned, I believe that if companies want to extend SAP SaaS solutions, they should think about using other languages, not ABAP. But I fear this is not about making a better solution, it is about making a marketable one. If customers believe that they can extend the value of their existing investments and also benefit from moving to SaaS based solutions, that is a great sales pitch. It’s having your cake and eating it.

This vision (even if it doesn’t work out to be the reality) of a simple gateway to moving to SaaS ERP is what I believe we are now being sold. This isn’t a story for developers, this is a story for the high level execs that sign the S/4HANA subscriptions.

I hope that a cloud based ABAP will be the gateway that enables some organisations to get off the on-premise model and head to the cloud. What I fully expect is that once they are there, they will realise that there are better and more supportable ways to extend. That would be great. In the meantime I fear that we will start bringing non-cloudy ways of working into the cloud landscape, which will likely cause failed or cost-overrun projects. We run the risk of preferring Cloud ABAP as a way to interact with S/4HANA cloud; that would be disastrous.

It has been suggested that Cloud ABAP will potentially be the solution that encourages adoption of the SAP Cloud Platform. I just hope it isn’t the solution that kills it. I would much rather the money being spent on putting ABAP into the cloud were used to handle some of the other issues I see with SAP CP, but clearly there is a view that it will deliver a return on investment.

Then again, if you’re not trying new stuff and making mistakes, you’re not learning. If you’re not learning, you’re falling behind. So here’s to making mistakes and learning! To steal the excellent closing lines from James’ post:

So buckle up because there’s no turning back at this point. It’s either evolve or die.

I look forward to a lively debate on this topic.

(James Wood – https://blogs.sap.com/2017/10/04/abap-in-the-cloud-is-this-a-good-thing/)

James, I couldn’t say it better mate. Although I would refer to the platform as SAP CP 😉

I think SAP Cloud Platform is and will be a key part of the story of SAP’s and customers’ evolution to the cloud. If it takes putting a “runs ABAP” badge on it to get people to see how useful it is, I’ll deal with it. But for sure, it would not be my recommendation to any organisation as best practice. I’ll keep an open mind – perhaps it will be one day; if so I’ll adapt and evolve, because that’s what you should do.

As always, these are my own thoughts, not my company’s. Please feel free to jump onto SCN and reply to James’ post. I’ll probably read those comments as well as whatever gets posted on Twitter.

 

Further update on SAP Gateway CSRF token farce

So, an update on my recent rant about CSRF protection that isn’t needed on SAP Gateway.

The folks in the very attentive HCI team have just added functionality into their solution, so if you configure an OData call to an onPrem system via SAP HANA Cloud Connector, it will automatically do the GET with a fetch for the CSRF token for you whenever you configure a data update operation.

That’s kinda cool, but all it does is sweep the offending rubbish under the rug.


https://www.flickr.com/photos/bruce_krasting/7695348682 – Sweep under the rug, credit Bruce Krasting

So now we have logic built into an integration platform that is needlessly slowing our integration flow because of a superfluous system requirement. An extra round trip for no reason.

In this case it is truly superfluous, because the original PUT that I was using had the user credentials as part of the header. That alone should make the CSRF token not required.

What this does show, is how SAP Cloud solutions like SAP HCI are able to update and fix stuff far faster than their onPrem partners. Even if it is a work-around to a problem that shouldn’t exist.

Herding cats, or managing GitHub issues – Waffle and HuBoard considered

Today I needed to decide on a tool to use to manage GitHub issues. I’ve got so many of these nowadays that it has become quite hard to decide which one to work on and also to communicate to others which ones I am working on.

So I turned to some of the simple Kanban board visualisations of GitHub issue tools that I’ve seen. There may well have been others (I’m pretty sure that it’s possible to get Trello to work with GitHub) but I wanted something that was simple.

I ended up comparing waffle.io and huboard.com.

HuBoard

I found that HuBoard had in many areas some cool functionality that could well be something I wanted. In particular it has the ability to mark a task/issue as “ready for next stage” and “blocked”. Blocked issues are particularly important to me – so having this clearly visible is important. Additionally HuBoard claims to have existing integration into Slack – that would be pretty cool, but given I already have GitHub integration into Slack, I’m not sure it’s needed. It would have been nice if the website had shown what that integration actually was, as that is something that could really sway me one way or the other.

HuBoard has a nicely minimalistic view – more in line with newer design patterns like Android material design and SAP Fiori. Labels on issues are small colour coded lines that appear when you hover over them.

huboard design

It’s quite neat and tidy. It also has a cool “fade away” filter option that just fades out un-selected items rather than removing them (two clicks removes them). However, clicking the same button multiple times to get different effects – I’m not sure that’s really a great idea. I’ve definitely been slapped over the wrist for bad (and not very accessible) UX when I’ve done similar things in the past. But technically, and from a usefulness standpoint (if you understand what you’re doing), that’s a pretty cool feature.

However, there were some concerns – when I loaded the HuBoard site on my phone it was good to see that it adapted responsively to the space available and listed the items rather than displaying a grid (well, I’m still debating if that was good, but at least it was responsive). However, when I then clicked on an issue:

mobile huboard issue small

Yuck! That’s not usable.

Edit: NB – see the note below; the privacy situation documented in the following section has since been changed.

I then looked at the site to understand what the privacy policy was:

privacy policy from HuBoard missing

The only info on the site was “This Application collects some Personal Data from its Users”.

I’m pretty sure this is because of HuBoard not paying someone for their generated policy:

huboard policy issue

 

However, I tweeted at HuBoard:

And as at the time of writing this post, I haven’t had a reply. To me, if I’m going to trust a cloud service, I need to be able to understand what it will and won’t do with my data. A non-working privacy policy page on the main site is a BIG #fail. Then not to respond to someone @ mentioning your twitter handle is a mark of the kind of service that I might expect if I were a customer. Not great.

Edit: so none other than the founder of HuBoard reached out to me. Privacy policy is fixed. It looks pretty good too, most of it is in plain English not legalese. Guess timezones for USA meant they were sleeping. The founder reaching out, that’s pretty good customer service. These guys will hopefully do some great things!

 

Waffle

I looked at Waffle.io. Now bizarrely the thing that most scares me about that product is its price – $0. I’ve learnt that if I’m not paying $ for something, then I am the product. I’m not sure if Waffle.io is still in beta/investor funding and is happy running without making any money, perhaps just piling up the company valuation? This whole SaaS valuation model sometimes confuses the crap out of me. When you consider that companies the size of Workday have profit margins of -24% (I mean, WTF?), it’s quite conceivable that charging money right now doesn’t boost the value of the company as much as having more subscribed users. Still, paying nothing for something just makes me want to look for the catch. But I couldn’t really find a catch (I imagine it won’t be free forever and that payment will be required soon, but if it’s at the same sort of price point as HuBoard then this shouldn’t be an issue – $24 a month to be productive is not that bad!)

Waffle has some features that I thought were pretty good, but specifically I liked the “size” attribute for an issue. By using this I can ensure that for each stage of the Kanban there aren’t too many issues being dealt with. So it can be fine to have quite a few small issues, but having the same number of large issues could cause a problem.

waffle

 

At the top of each column was a counter showing the number of issues and the total size. Next to my lovely picture was a number showing what I thought the size of the issue was.

This functionality I like – it will help manage all the issues and ensure we’re not going crazy pushing so much into testing without actually testing it.

It was also nice in Waffle to be able to see the number of comments an issue has – it’s often worth drilling into those issues that have a lot of comments, even if it’s not yet my issue.

However, compared to HuBoard the amount of information shown can result in a quite busy screen – for example from Waffle’s own GitHub repo…

waffle2

 

Personally, I didn’t mind the “noise”, but others I spoke to thought the minimal style of HuBoard perhaps better. Since I often have so many labels that colour alone is going to be an issue, I think I prefer this layout.

In contrast to HuBoard, the mobile interface is not at all responsive: you see the same site, just zoomed out so you can’t read anything. That said, pinch zooming and scrolling around on the phone isn’t hard, and it does give you a better perspective of how the lanes compare. I’m pretty sure that there is probably a better responsive layout that could be adopted. But compared to the rendering mess that happened in HuBoard when accessing from mobile, it was much easier to use the Waffle site.

Conclusion

You can probably see where I’m heading! I decided to go with Waffle for the moment, but I’ll keep an eye out for HuBoard. As with all these SaaS apps, iteration is the name of the game, and I’m sure that feature parity won’t be far off. Neither tool has an Android mobile app, but neither tool is very usable on a phone – so perhaps when one of them makes that leap it will differentiate itself. We shall see.

After I’ve been using Waffle for a while, I’ll perhaps write another post about “real life” experience.

Cheers!

 

 

Security in depth – or a bug waiting to happen? – CSRF protection on SAP Gateway


What’s that? – It’s the dragon that guards the locked door, we feed people who ask silly security questions to it.

<rant>

So I’ve got my knickers in a twist again. Recently I was playing around with sending some OData to my SAP server when it refused me. Now, I didn’t like that, but at least it was kind enough to tell me why. Apparently I hadn’t fed it a CSRF token. OK, so I looked in the headers of the GET that did work, and lo and behold there was a CSRF token there. I fed that into the POST I was doing, and bingo it worked.
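For anyone who hasn’t hit this yet, the dance looks roughly like this – a sketch in Java; the service URL, entity set and payload are made up, but the “X-CSRF-Token: Fetch” request header and the token echoed back in the response header are the standard Gateway behaviour:

import java.net.CookieManager;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class GatewayCsrfDance {
	public static void main(String[] args) throws Exception {
		String serviceUrl = "https://mysapserver/sap/opu/odata/sap/ZMY_SRV"; // placeholder
		String auth = "Basic " + Base64.getEncoder()
				.encodeToString("user:password".getBytes()); // placeholder credentials

		// The cookie manager keeps the session cookies - the token is only valid for that session
		HttpClient client = HttpClient.newBuilder()
				.cookieHandler(new CookieManager())
				.build();

		// 1. Any GET with "X-CSRF-Token: Fetch" - Gateway returns the token in the response header
		HttpRequest fetch = HttpRequest.newBuilder()
				.uri(URI.create(serviceUrl + "/EntitySet?$top=1")) // hypothetical entity set
				.header("Authorization", auth)
				.header("X-CSRF-Token", "Fetch")
				.GET()
				.build();
		String token = client.send(fetch, HttpResponse.BodyHandlers.ofString())
				.headers().firstValue("x-csrf-token").orElseThrow();

		// 2. The POST only works if the token (and the same session cookies) go back with it
		HttpRequest post = HttpRequest.newBuilder()
				.uri(URI.create(serviceUrl + "/EntitySet"))
				.header("Authorization", auth)
				.header("X-CSRF-Token", token)
				.header("Content-Type", "application/json")
				.POST(HttpRequest.BodyPublishers.ofString("{ \"SomeField\": \"some value\" }"))
				.build();
		System.out.println(client.send(post, HttpResponse.BodyHandlers.ofString()).statusCode());
	}
}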

Now it seems to me that many many people have hit the same thing and found the same solution. Indeed, I asked around some people I knew and they told me: “Get over it Chris, it’s in the header of your GET, it lasts all session, just use it!” But me being me, no, I wouldn’t accept that!

Slight aside – they also mentioned “Damnit, I remember when that patch came in, it buggered up my custom Gateway app and I had no warning that it was coming, took me ages to figure out why it wasn’t working.”

 

So I thought – OK? Why? Why do we have CSRF protection in the first place, what on earth is it?

CSRF protection – Cross Site Request Forgery protection – according to the websites I read, is supposed to protect against the case where, unknown to a user, a cookie in the browser that is used for authentication allows a malicious site to alter data on your system (and in the case of Gateway, your SAP system).

So to send a PUT or POST or DELETE (the verbs that can change data) from a browser without the user knowing is going to involve 1 of 2 things.

a) An injection of HTML on the page adds either a form that is going to POST some data (the typical type of attack CSRF protects against) or a link, e.g. an img tag, which GETs data.

b) An injection of some script, e.g. JS on the page, that is going to do the PUT/POST/DELETE.

In the case of (a – POST) the payload will be malformed and Gateway isn’t going to accept that as valid OData – so no security worries anyway. And for (a – GET) CSRF protection isn’t even applied.

In the case of (b), well, if I can embed JS, I can just as easily embed a GET, pull the token from the header and then do an update with the CSRF token. Indeed the sites that advocate for the CSRF token approach make it clear that it cannot protect you in the case where you have malicious JavaScript.

In the case that the script is running on a page from a different domain, then CORS will kick in and stop the access – but if somehow the injection is on my own domain, I don’t see how we’re protected.

So I was at a loss. What protection does CSRF actually offer Gateway?

I further researched:

There’s a great explanation, which does a better job than I have, at:

Play Framework

It is recommended that you familiarise yourself with CSRF, what the attack vectors are, and what the attack vectors are not. We recommend starting with this information from OWASP.

Simply put, an attacker can coerce a victim’s browser to make the following types of requests:

  • All GET requests
  • POST requests with bodies of type application/x-www-form-urlencoded, multipart/form-data and text/plain

An attacker can not:

  • Coerce the browser to use other request methods such as PUT and DELETE
  • Coerce the browser to post other content types, such as application/json
  • Coerce the browser to send new cookies, other than those that the server has already set
  • Coerce the browser to set arbitrary headers, other than the normal headers the browser adds to requests

Since GET requests are not meant to be mutative, there is no danger to an application that follows this best practice. So the only requests that need CSRF protection are POST requests with the above mentioned content types.

Since Gateway does not support POST requests with bodies of type application/x-www-form-urlencoded, multipart/form-data and text/plain (or if it does, there’s your problem right there!) there is no need for CSRF protection.

I then had a fun conversation on Twitter with Ethan

The great thing about chatting with Ethan is you always come out having learnt something.

He makes a good point, and I’ll paraphrase him:

“The best security is deep and many layered and protects not only against the things that you know may happen, but also against those that you’re pretty sure won’t.”

I was wrong – “to send a PUT or POST or DELETE (the verbs that can change data) from a browser without the user knowing is going to involve 1 of 3 things” (not 2), with the third being:

An exploitation of a hitherto unknown browser bug that allows it.

So now I’m confused. Is it worthwhile implementing the hassle that is CSRF protection, including the potential slowdown in speed of response from the solution (a paramount concern in a mobile app), for a situation that might happen?

When I’m writing ABAP code, I’m happy to trade away performance of the code for ease of maintenance. I don’t use pointers (field symbols) to loop over data that I do not intend to change, because some fool could come along later and accidentally do just that. If I instead use a work area, there isn’t that risk.

So in some respects I already do work that makes the solution slower to ensure lower risk, so shouldn’t I just do the CSRF thingy?

However, here is the reason for the risk – I don’t trust that the people maintaining the code after I leave will understand what I have done in my implementation of CSRF protection and won’t make a mistake. Even if I’m using UI5 in my application to update my SAP system, will they remember to call the refreshSecurityToken method every time before a PUT, POST or DELETE? Will they test it? Will they let the session expire in the testing so that they actually need to call the refreshSecurityToken method? I really hope so, but I doubt it. I see applications going into error and data not being updated when it should have been, because of “needless” CSRF protection.


weighing Dodgy Code vs Browser Bug risks

So what I see is this: Security in enterprise is paramount, Gateway is enterprise software, it needs to be secure. So SAP made it so, even if it hasn’t really made a big difference or fixed any known security holes. But, “just in case”. However, custom code (and even standard code 😉 ) will have bugs, ones that rely on sessions timing out are particularly hard to test and will get through. The risk to your Gateway based mobile app is greater by having CSRF protection enabled than it is to your data being maliciously hacked through zero-day exploits. But I guess it depends on what that data is 🙂 .

</rant>

OK, one final bit…

<rant>

Given that I might not actually be using my Gateway for a UI app but for machine to machine transactions, would it PLEASE be possible that if I provide a valid authentication header in the PUT/POST/DELETE we ignore the CSRF thingy? If I can somehow come up with a valid auth header, then we aren’t protecting anything with a CSRF token, we’re just making transactions slower by requiring multiple round trips that shouldn’t be needed.

</rant>

I feel better now. 🙂

 

Read how this discussion unfolds over at SCN…

http://scn.sap.com/community/gateway/blog/2014/08/26/gateway-protection-against-cross-site-request-forgery-attacks#comment-611490

P.S. my last post from SCN comment thread as I think it’s an important summary:

The thing is, by not implementing CSRF protection, we aren’t making our services insecure. There are no known ways to use CSRF against Gateway currently.

There is the case of protection against unknown attacks, but is that worth the cost, risk, effort?

Not using CSRF protection does not mean you are making your service insecure. It’s just trading “just in case” against real life complexity, risk and cost.

Depending on the data concerned, that “just in case” might be worth it. It won’t always be.

Architects have a responsibility to their companies to balance these risks and decide. We have the responsibility to inform them clearly and not just pretend that security is the only and overwhelming factor to consider.

Sometimes we put security on a pedestal and everything has to be done to address it. But we should remember that everything should have a risk/reward curve and sometimes NOT coding for a security risk is actually less risk than coding for it.

 

 

Multitenant Spring Data JPA with EclipseLink on SAP HANA Cloud Platform

Just a quick post here today, and hopefully I flesh out a more detailed post on SCN later:

OEM model

There are probably two ways to make money developing apps on the HANA Cloud Platform:

1) be incredibly good at it, such that you can build truly awesome stuff – customers aren’t going to care about platform costs and will still pay you bucket-loads

2) use the OEM model and build very efficient apps that solve a little problem for lots of people. Keep costs low, and sell to lots and lots of people.

3) be a big consultancy, wine & dine the people with the money, put loadsa people on simple projects and bill lots.

I’m aiming for a mix of 1 and 2 to just get into the sweet spot; of course that’s hard work. But this evening I made a step in the right direction by enabling multi-tenant access to one of my apps, which auto-magically puts the tenant key in all DB accesses in my app – without me having to do any work to specifically write that into the queries.

For reference, the magic happens with a custom implementation/extension of JpaRepositoryFactory, JpaRepositoryFactoryBean and SimpleJpaRepository and one line in my Spring XML:

<jpa:repositories base-package="com.wombling.blah.blah.dao"
factory-class="com.wombling.blah.blah.multitenancy.MultiTenantJpaRepositoryFactoryBean" />

my custom tenancy resolver:

public class CurrentTenantResolverImpl implements CurrentTenantResolver<String> {

	@Override
	public String getCurrentTenantId() {

		InitialContext ctx;
		try {
			ctx = new InitialContext();

			// Look up the platform-provided TenantContext resource via JNDI
			Context envCtx = (Context) ctx.lookup("java:comp/env");
			TenantContext tenantContext = (TenantContext) envCtx.lookup("TenantContext");

			// The id of the tenant (account) the current request is running for
			return tenantContext.getTenantId();
		} catch (NamingException e) {
			// Fallback for local/single-tenant runs where no TenantContext is bound
			return "NOT_CURRENTLY_RUNNING_MULTI_-_TENANT"; // 36 chars as per real tenant id
		}
	}

}

is called each time a DB query is made (which is already pretty invisible due to the “magic” of Spring Data and JPA).
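For context, here’s a minimal sketch (not my actual entity, and assuming the custom repository factory takes the value from the resolver above and sets it as the eclipselink.tenant-id property on the EntityManager) of how an EclipseLink entity gets flagged as multi-tenant, so the discriminator lands in every generated query:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.eclipse.persistence.annotations.Multitenant;
import org.eclipse.persistence.annotations.MultitenantType;
import org.eclipse.persistence.annotations.TenantDiscriminatorColumn;

// Hypothetical entity - EclipseLink appends "TENANT_ID = :tenant" to every query against it
@Entity
@Multitenant(MultitenantType.SINGLE_TABLE)
@TenantDiscriminatorColumn(name = "TENANT_ID", contextProperty = "eclipselink.tenant-id")
public class Widget {

	@Id
	private Long id;

	@Column(name = "NAME")
	private String name;

	// getters/setters omitted for brevity
}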

I have to give a huge shout out to   from Slovakia who posted up most of the code I reused. Thanks dude!

http://codecrafters.blogspot.sk/2013/03/multi-tenant-cloud-applications-with.html

I’ll get back to you with some more HR type stuff later.

 

iOS – not quite the Enterprise developers’ best choice

I was recently at a conference where there was a big cheer from the audience when the vendor announced that they were moving away from a model of updating all the customer’s development/test and production systems 4 times a year, to a model of still doing quarterly updates, but updating the dev/test environment one month earlier. The crowd cheered. Wow! (The conference was SuccessConnect, the product SuccessFactors and the vendor SAP, but that is irrelevant for this particular post).

So why on earth would people want to get their software updated later? And why would such a change cause them to cheer? It’s quite simple really! Risk reduction.

risk reducer

 

Reducing Risk

To quote Donald Rumsfeld: “there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know”. Whilst it’s blinking difficult to deal with the third category of “unknown unknowns”, it’s much easier to deal with the known unknowns. In our case of software upgrades, we know that there are likely to be changes, but we aren’t sure exactly what. So having an early release in a system that isn’t critical for the running of our business means we have the possibility to make those known unknowns into known knowns and deal with them. (Hmm, perhaps I shouldn’t have quoted Donald, this is getting a bit confusing! – but hopefully you get the point!)

Basically what was being offered was a risk mitigation strategy, and the enterprise just LOOOOOOVES that.

Back to Mobile apps

OK, so what’s this got to do with mobile applications? Well, basically, for mobile applications in the enterprise space we should be able to offer the same thing. I’m not talking here about applications that companies develop internally and deploy to all their staff, but applications found in application marketplaces (app stores if you will) that are used by enterprises.

Say there is this great app out there that allows you to track the driving speed and location of all your delivery truck drivers. It’s great because the same functionality just a few years ago cost thousands of dollars per truck and had to be downloaded manually each night. Now, you have it instantly and at a subscription cost of $20 per user per year, with awesome real-time reporting and everything! Great! But the reason it is so cheap is the developer is selling this software in a SaaS model. They have a multi-tenant architecture and when they make an update they update all customers at once. Now what happens if they push out an “improvement” in the user interface of the solution? Well 90% of your users will probably adjust, but 10% (or more) are suddenly going to be referring back to that print-out of the training material that you sent them getting very confused, phoning the help desk and generally finding an excuse for not doing work. Bad!

But had you known that a change was coming, what could you have done? Well, you could have updated the training material, sent comms explaining the wonderful new feature, etc.

So how could you have known this change was coming? Well, the company that you’re subscribing to could have sent you some details about the change. But what if they thought that the change was so insignificant it didn’t need any comms? And what if it’s just the particular way that your workforce use the app that means that it might need explaining? Then you are going to need another plan.

Another plan

chrome beta

In the Android application marketplace (Google Play) there is the concept of a Beta version of an application. Many popular applications (for example Chrome and Firefox) have beta versions of their software that showcase and test out new versions of UI and functionality.

This allows companies to test out new versions of software before their major install base start using it. And because the software is flagged BETA people know that there might be things different. It is a risk reduction strategy for not only the consumers of the software but also the developers. Win-Win!

Apple however in their app store submission guidelines: https://developer.apple.com/app-store/review/guidelines/#functionality

2.9  Apps that are “demo”, “trial”, or “test” versions will be rejected. Beta Apps may only be submitted through TestFlight and must follow the TestFlight guidelines

and from the TestFlight guidelines: https://developer.apple.com/app-store/Testflight/

External Testers (Coming Soon)

Once you’re ready, you can invite up to 1,000 users who are not part of your development organization to beta test an app that you intend for public release on the App Store.

So instead of making a publicly available Beta version, they are going to restrict it to a maximum of 1,000 Beta testers. Not so good for that SaaS developer with over 1,000 customers, is it? And even then – that functionality isn’t even released to the market yet! The whole TestFlight thing was only announced a few days ago!

In summary

So back to my title – if you’re a SaaS developer and you want to help your customers by reducing your risk and theirs in the mobile application deployment space, build for Android, not Apple. If you care about enterprise, then be aware that risk mitigation is a big thing. The reality is that as an enterprise developer we need to deliver for both Android and Apple devices, but one of them is clearly more enterprise friendly in one particular respect.

It would be great for Apple to take this on board and offer an unrestricted “TestFlight” program for enterprise software developers… We can cross our fingers and hope!

Continuous Integration vs Phased Deployment in a SaaS world

I was very interested to read some links that Naomi Bloom posted about how Workday have moved to a continuous integration deployment model rather than a phased release.

As a developer, I love the idea of continuous integration: having a set of tests that can automatically check whether the code I have built will cause an issue in production and then allow me to move it up to prod immediately. It fits with TDD and all the other cool things I want to do. Awesome!

If I were writing code in the internal development teams of Workday or SuccessFactors, I’d want the software to be CI.

However! As a developer of extensions to one of those platforms, I couldn’t think of a worse option! If you look at the “disadvantages” section in the linked Wikipedia article on CI, you’ll notice that one very important thing is to have lots of good automatic test scripts. The problem is, a vendor can only possibly run their own test scripts, they can’t run mine. (Perhaps they could run mine if such an API was built, but could they justify not deploying to prod because a little-used partner extension failed a script?) So what if some change that the vendor makes breaks a behaviour in my code? Well, that’s bad for me. I’d better hurry up and fix it, because all my customers are now running broken code, and the first I found out about it was when it broke. And likely I’m not going to find out until one of my customers complains – unless I have proactively set my test scripts to run every hour and send me a message when something breaks, in which case I’d better be ready to do emergency support 24/7. Yeah, just what I want. NOT!

This would be a huge burden on an extension provider – you wouldn’t have a stable platform to build on.

With SuccessFactors being on a phased release rather than continuously integrated to production, it is much easier for me to join in with the testing of my solution before it hits the market. I know that my customers aren’t going to get a nasty shock because something suddenly breaks/changes behaviour, because I have a window to test that before it impacts them. I also know when that window is going to be, so I can plan around it and allocate my resources. Whilst the solution might be wonderfully cloudy and elastic, my skilled pool of extension developers is definitely less cloudy and more finite and fixed.

Now it might be possible to allow partners to have an early access box, and perhaps delay CI deploys to production by a week or so to allow partners to test their code. But that is one hell of an effort you’re demanding of your partners. And as one of those potential partners, I can say I’d be thinking very long and hard about the risk you as the vendor are putting me at, and probably would decide not to go there.

I think that in a world where purchasing 3rd party add-ons for your cloud platform will become the norm (allow me my dreams please), and where the power of the platform is driven by these add-ons/apps, having a phased release makes sense. How cool would an iPhone be without any apps from the AppStore, how good would an S5 be without apps from Google Play? They are both great devices, but they are awesome when enhanced by external developer partners. These mobile solutions have phased releases. It’s not because they couldn’t have constant updates, the tech is easily there for that to happen, but because in order to sustain the applications/application developers that make them so cool they need to provide a stable platform.

I’m really glad that SuccessFactors provides a stable environment for me to build on, as I am convinced that HCM SaaS has a huge potential to be enhanced and extended for the better use and consumption by businesses. It’s a real strength of the solution, and I am very happy to play a part in this story, and glad that SAP and SuccessFactors are carefully considering the needs of the development partner in this scenario.

All that said, it would be cool to be developing in a continuous integration solution, but just not for the partners building on your solution.

Scaling or cropping profile images into circles when the source isn’t a square

WARNING CODE AHEAD

<geek>

It took me probably too long to figure out how to do this so I thought I’d share.

circles

To do this in a way that most browsers support wasn’t so obvious (to me).

In the end I did it (approximately) like this:

HTML

<div class="profile-image" style="background-image:url('profile-img1.jpg')">
 <img src="profile-img1.jpg">
</div>

CSS:

div.profile-image {
 width: 47px;
 height: 47px;
 background-repeat: no-repeat;
 background-position: center center;
 background-size: cover;
 overflow: hidden;
 border-radius: 23.5px;
 -webkit-border-radius: 23.5px;
 -moz-border-radius: 23.5px;
 box-shadow: 0 0 4px rgba(0, 0, 0, .8);
 -webkit-box-shadow: 0 0 4px rgba(0, 0, 0, .8);
 -moz-box-shadow: 0 0 4px rgba(0, 0, 0, .8);
}
div.profile-image img {
 min-height: 100%;
 min-width: 100%;
 /* IE 8 */
 -ms-filter: "progid:DXImageTransform.Microsoft.Alpha(Opacity=0)";
 /* IE 5-7 */
 filter: alpha(opacity = 0);
 /* modern browsers */
 opacity: 0;
}

The image tag is in there so it’s still possible for the user to interact with the image, i.e. save it if they want, but it is made see-through.

So the user “sees” the background image, which is positioned such that it covers the div, so all of the circle will have content and the middle bit of the image will be shown. The circle is made by making the border radius half the width of the div.

The important bits were the “background-size: cover;” and the “background-position:center center;”

Obvious when you know how.

</geek> (as if!)

 

Credits to http://stackoverflow.com/questions/11552380/how-to-automatically-crop-and-center-an-image for the inspiration!