
OIFF (Order it for future) principle
 

In programming, we have many different principles: SOLID, DRY, YAGNI, KISS. I might blog about those in the future, but right now I am going to introduce one I call the OIFF principle.

What is the principle about?

A way to order your code so that potentially expensive future bugs turn into less costly ones.

The above definition might not make much sense on its own, so let’s dissect a few examples – it will become clearer.

This principle can be seen as a subset of the defensive programming approach. Here, we are more concerned with the order in which we perform things, and how that order protects us from the future.

Case 1 – HTML filtering

What is the desired result? We want to filter out all the dangerous HTML tags.
The way many beginners would start solving this problem is to define a blacklist of HTML tags and filter
them out based on that list.

What is wrong here? They are violating the OIFF principle: they are not being defensive. It’s highly likely that the blacklist of HTML tags
will miss some of the dangerous ones.

Fix: create a whitelist of HTML tags, allow only the tags that appear on it, and reject all others.
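As a rough illustration (the allowed tags below are placeholders, not a recommended list), the whitelist decision might look something like this:

```csharp
using System;
using System.Collections.Generic;

static class TagFilter
{
    // Hypothetical whitelist – only tags we explicitly trust survive.
    static readonly HashSet<string> AllowedTags =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase)
        {
            "p", "b", "i", "em", "strong", "ul", "ol", "li"
        };

    // Anything we don't recognise is rejected by default, so a brand-new
    // dangerous tag does no harm even before we get around to updating the list.
    public static bool IsAllowed(string tagName) => AllowedTags.Contains(tagName);
}
```

With a blacklist the default is inverted: anything we don’t recognise is let through, which is exactly the wrong failure mode.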

Side note: in fact, Microsoft has long had a method for rendering HTML-encoded output on a website. It works by encoding all known dangerous Unicode characters.
Microsoft has since released the AntiXss library, which offers the same functionality with one slight difference: AntiXss does not know which characters are ‘dangerous’. Instead, it defines a dangerous character as ‘any character that is not trusted’, so the algorithm (sketched below) is:

1) If the character is not in the “trusted characters” list, encode it.
2) Otherwise, leave it alone.
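A minimal sketch of that idea (this is not the actual AntiXss implementation, and the trusted set here is deliberately tiny):

```csharp
using System.Text;

static class WhitelistEncoder
{
    public static string Encode(string input)
    {
        var sb = new StringBuilder();
        foreach (char c in input)
        {
            // Trust only a small, explicitly listed set of characters...
            bool trusted = (c >= 'a' && c <= 'z') ||
                           (c >= 'A' && c <= 'Z') ||
                           (c >= '0' && c <= '9') ||
                           c == ' ';

            // ...and encode everything else as a numeric character reference,
            // including characters nobody has thought to blacklist yet.
            if (trusted)
                sb.Append(c);
            else
                sb.Append("&#").Append((int)c).Append(';');
        }
        return sb.ToString();
    }
}
```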

The distinction is a small one, but there is a reason the newer library does it this way.

It has to do with how future-proof and safe the approach is. That is what the principle is all about: we want to think in terms of both the present and the future.

Say that, in the future, the HTML committee decides to add a new, potentially dangerous HTML tag.

If we are using the blacklist stripping approach, we will have to update our blacklist, and every existing system will need updating.
If we are using the whitelist stripping approach, we STILL need to update it, but the new tag can’t do any serious damage while we haven’t.

Humans often forget, and they make mistakes.

Case 2 – AntiForgeryToken

This, I feel, is also a bad violation of the principle. If you have used MVC, you should be familiar with the [ValidateAntiForgeryToken] attribute.
The idea is that you decorate your actions with the attribute, and the posted data must carry an anti-forgery token, which is then validated. This is to prevent XSRF attacks.

What is the problem, and how is the principle violated here? To understand, let’s ask a series of questions and see where we arrive.

What is the result we want to achieve? I want to make sure that my site is not vulnerable to XSRF attacks.
How could we achieve it? We could either:

1) Globally enable anti-forgery validation for every controller, and exclude only the actions we don’t want validated.
2) Individually decorate each action we do want validated with the attribute.

Which one is more defensive, in your opinion? I would say the first one.

Why?

Humans often forget.

What if a senior developer forgets to add the attribute to a controller? With the first option, it is applied automatically.
What if a junior developer never knew about this feature or that he had to do this? The fact that he didn’t add the anti-forgery token to his view
would make his action fail, so he learns what has to be done, and why.
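To make the comparison concrete, here is a rough sketch of option (1) for classic ASP.NET MVC. The filter and the opt-out attribute names are my own, not framework APIs; only AntiForgery.Validate() and GlobalFilters come from the framework:

```csharp
using System;
using System.Web.Helpers;
using System.Web.Mvc;

// Hypothetical opt-out marker for the rare action that must not be validated.
[AttributeUsage(AttributeTargets.Method)]
public class SkipAntiForgeryValidationAttribute : Attribute { }

// Registered globally, so nobody has to remember to add the attribute by hand.
public class ValidateAntiForgeryTokenOnPostAttribute : FilterAttribute, IAuthorizationFilter
{
    public void OnAuthorization(AuthorizationContext filterContext)
    {
        bool optedOut = filterContext.ActionDescriptor
            .IsDefined(typeof(SkipAntiForgeryValidationAttribute), inherit: true);

        var request = filterContext.HttpContext.Request;
        if (!optedOut && string.Equals(request.HttpMethod, "POST", StringComparison.OrdinalIgnoreCase))
        {
            AntiForgery.Validate(); // throws if the token is missing or invalid
        }
    }
}

// In Global.asax.cs:
// GlobalFilters.Filters.Add(new ValidateAntiForgeryTokenOnPostAttribute());
```

With this in place, forgetting becomes the safe default: a POST without a token fails loudly instead of silently shipping a vulnerability.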

Security is an area that can’t be taken light-heartedly. XSRF attacks can be very powerful, ranging from posting an image to Facebook
all the way to account hijacking. A framework should not nag all the time, but it has to in important scenarios.

Apparently, MVC has been sticking with option (2), and this is one of the reasons why approximately 65% of MVC websites are prone to XSRF attacks. Changing the approach now might break a lot of code, but it’s never too late for security.

Such a decision would force every user to understand the implications of XSRF attacks and how to deal with them.

Case 3 – it’s not all about security

I don’t want to leave anyone with the impression that this principle is only about security. The reasons I gave security examples are:

1) It’s the most important use of the principle.
2) I was working with a system that violated it.

Long story short, RabbitMQ violates the OIFF principle. Let me explain for those unfamiliar with RabbitMQ:

You can create exchanges (places where you push messages) and queues (places where you receive messages). These two components are tied together using bindings, and different binding configurations allow you to do cool things.

It’s easy to get into a situation where an exchange can’t route a message to any queue because the bindings are configured incorrectly. If this happens, the message is dropped silently.

I’ve had scenarios where the routing configuration between an exchange and a queue broke because someone decided to change the letter “A” into “a”. Nobody knew that RabbitMQ routing is case-sensitive. Did anyone notice that RabbitMQ was dropping messages? No.

What would be the correct approach here?

RabbitMQ should have designed the system so that if a message can’t be routed to any queue, it is placed into a default alternate exchange, which can then be monitored through e-mail alerts or some other kind of logic.

This MIGHT be undesirable functionality for some people, but that’s exactly why you **COULD** configure a RabbitMQ exchange not to participate in such logic. Instead, they have done it the other way around: the safe behaviour is opt-in rather than the default.
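For completeness, here is roughly what the opt-in fix looks like with the classic .NET RabbitMQ.Client API (IModel); the exchange and queue names are made up for the example:

```csharp
using System.Collections.Generic;
using RabbitMQ.Client;

static class AlternateExchangeSetup
{
    public static void Configure(IModel channel)
    {
        // A catch-all exchange and queue that receive anything the main
        // exchange cannot route, so misrouted messages stay visible instead of vanishing.
        channel.ExchangeDeclare("unroutable", ExchangeType.Fanout, durable: true);
        channel.QueueDeclare("unroutable.messages", durable: true,
                             exclusive: false, autoDelete: false, arguments: null);
        channel.QueueBind("unroutable.messages", "unroutable", routingKey: "");

        // The main exchange opts in via the "alternate-exchange" argument.
        // Without it, a message whose routing key matches no binding
        // (say "a" instead of "A") is silently dropped.
        var args = new Dictionary<string, object> { { "alternate-exchange", "unroutable" } };
        channel.ExchangeDeclare("orders", ExchangeType.Direct, durable: true,
                                autoDelete: false, arguments: args);
    }
}
```

Alternate exchanges are real RabbitMQ functionality; my complaint is only that they are opt-in, where OIFF says the safe routing behaviour should have been the default.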

 
