The Software Simpleton

Nomadic cattle rustler and inventor of the electric lasso

Clojurescript - Using Transducers to Transform Native Javascript Arrays

I’ve been digging into clojurescript more and more, but as I am still quite new to cljs, I find myself overzealously calling the clj->js and js->clj interop functions that transform javascript arrays to clojurescript vectors, maps, lists etc. and vice versa. I found this frustrating as I want to use the power of the clojurescript language and not have to drill down to javascript unless it is absolutely necessary.

I’ve been writing a react wrapper component which has pretty much dictated that I need to be dealing with native javascript objects at all times and as such I am having to call these interop functions. An example of this is in the code below:

(let [prevKeys (.keys js/Object (or prevChildMapping (js-obj)))
      nextKeys (.keys js/Object (or nextChildMapping (js-obj)))
      keysToEnter (clj->js (filter #(not (.hasOwnProperty prevChildMapping %)) nextKeys))
      keysToLeave (clj->js (filter #(not (.hasOwnProperty nextChildMapping %)) prevKeys))]

      (set! (.-keysToEnter this) keysToEnter)
      (set! (.-keysToLeave this) keysToLeave)))))

On lines 3 and 4, I am calling clj->js to transform a clojurescript PersistentVector into the javascript native array equivalent. What I really wanted was to call the clojurescript sequence functions map, reduce, filter etc. on native javascript objects. I asked if this was possible in the clojurescript irc and transducers were put forward as a means of achieving the goal.


I had heard of transducers in the clojure world without taking the trouble to see what all the fuss was about, but I had no idea that they were available in clojurescript. I’m now going to give a brief introduction as to what transducers are, but there is lots of good material out there that probably does a better job, and Rich Hickey’s strangeloop introduction to them is a great start.

I always approach a new concept by first determining what problem the concept solves, and with transducers the problem is one of decoupling. You are probably familiar with filter, which returns all items in a collection for which a predicate function returns true:

(filter odd? (range 0 10)) ;=> (1 3 5 7 9)

It should be noted that filter could be constructed using reduce.

(defn filter-odd
  [result input]
  (if (odd? input)
    (conj result input)
    result))

(reduce filter-odd [] (range 0 10)) ;=> [1 3 5 7 9]

The problem with the above is that we cannot replace conj on line 4 with another builder function like +. This problem holds true for all the pre-transducer sequence functions like map, filter etc. Transducers set out to abstract away operations like conj so that the creation of the resulting data structure is decoupled from the map/filter logic.

conj and + are reducing functions in that they take a result and an input and return a new result. We could refactor our filter-odd function into a more generic filtering function that allows us to supply different predicates and reducing functions by using higher order functions:

(defn filtering
  [predicate]
  (fn [reducing]
    (fn [result input]
      (if (predicate input)
        (reducing result input)
        result))))

(reduce ((filtering odd?) conj) [] (range 0 10)) ;=> [1 3 5 7 9]
(reduce ((filtering even?) +) 0 (range 0 10)) ;=> 20

The above is not as scary as it looks and you can see on lines 9 and 10 that we are able to supply different reducing functions (conj and +). This is the problem that transducers set out to solve, the reducing function is now abstracted away so that the creation of the datastructure is decoupled from the sequence function (filter, map etc.) logic.
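Since the end goal of this post is javascript interop, here is the same decoupling sketched in plain JavaScript (my own translation, not from the original gist):

```javascript
// filtering takes a predicate and returns a function that wraps ANY
// reducing function you hand it -- the data-structure choice is decoupled.
const filtering = predicate => reducing => (result, input) =>
  predicate(input) ? reducing(result, input) : result;

const range10 = [...Array(10).keys()]; // 0..9, like (range 0 10)

// build a vector-like array with a push-style reducer:
range10.reduce(filtering(x => x % 2 === 1)((arr, x) => (arr.push(x), arr)), []);
// → [1, 3, 5, 7, 9]

// or sum with + as the reducing function:
range10.reduce(filtering(x => x % 2 === 0)((a, b) => a + b), 0);
// → 20
```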

As of clojure 1.7.0, most of the core sequence functions (map, filter etc.) gained a new single-argument arity that returns a transducer. For example, this call returns a transducer from filter:

(filter odd?)

One of the new ways (but not the only way) to apply transducers is with the transduce function. The transduce function takes the following form:

(transduce xform f init coll)

The above states that transduce will reduce a collection coll with the initial value init, applying a transformation xform to each value and applying the reducing function f.

We can now apply this to our previous example

(def xform
  (filter odd?))

(transduce xform + 0 (range 0 10)) ;=> 25
(transduce xform conj [] (range 0 10)) ;=> [1 3 5 7 9]

I hope it is obvious that (range 0 10) is coll, [] (or 0) is the init, xform is the transducer and + or conj are the reducing functions.
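A toy version of transduce in plain JavaScript (my sketch, not a real library) makes the mechanics concrete:

```javascript
// a transducer from a predicate: it wraps the reducing function rf
const makeFilterXf = pred => rf => (result, input) =>
  pred(input) ? rf(result, input) : result;

// transduce: let the transducer wrap f, then do an ordinary reduce
function transduce(xform, f, init, coll) {
  return coll.reduce(xform(f), init);
}

const nums = [...Array(10).keys()];           // 0..9, like (range 0 10)
const xform = makeFilterXf(x => x % 2 === 1); // like (filter odd?)

transduce(xform, (a, b) => a + b, 0, nums);                 // → 25
transduce(xform, (arr, x) => (arr.push(x), arr), [], nums); // → [1, 3, 5, 7, 9]
```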

Meanwhile Back in Javascript land……

If we now shift back to our specific example, we can use a transducer to transform a native javascript array because a transducer is fully decoupled from input and output sources.

This is the current code that we want to refactor:

(clj->js (filter #(not (.hasOwnProperty prevChildMapping %)) nextKeys))

So the first question is what would the reducing function be when dealing with native arrays? The answer is the native array push method, which adds a new item to an array. My first ill-thought-out attempt at the above looked something like this:

(transduce (filter #(not (.hasOwnProperty prevChildMapping %))) (.-push #js[]) #js [] nextKeys)

This is completely wrong because I had not grasped what is required of the reducing function. A reducing function takes a result and an input and returns a new result e.g.

(conj [1 2 3] 4) ;=> [1 2 3 4]
(+ 10 1) ;=> 11

The push function does not satisfy this requirement, as push actually returns the new length of the array, which is not what is expected. What was needed was some way of turning the push function into one that behaved in the way the transducer expected. The push function would need to return the result:

(fn [arr x] (.push arr x) arr)

But as it turns out, this also does not work, because a reducing function given to transduce has to have 0, 1 and 2 argument arities, and our reducing function only has the 2 argument arity.

As it turns out, both clojure and clojurescript provide a function called completing that takes a function and returns a function suitable for transducing, by wrapping the reducing function and adding an extra single-argument arity that simply calls identity behind the scenes. Below is the completing function from the clojure.core source:

(defn completing
  ([f] (completing f identity))
  ([f cf]
     (fn
       ([] (f))
       ([x] (cf x))
       ([x y] (f x y)))))
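What completing does can be sketched in plain JavaScript (my own analogy: clojure dispatches on arity, here we dispatch on argument count):

```javascript
// completing adds the missing single-argument "completion" arity,
// defaulting it to the identity function.
function completing(f, cf = x => x) {
  return function (...args) {
    if (args.length === 0) return f();      // init arity
    if (args.length === 1) return cf(args[0]); // completion arity
    return f(args[0], args[1]);              // the reducing step itself
  };
}

const rf = completing((arr, x) => { arr.push(x); return arr; });
rf([1, 2], 3); // → [1, 2, 3]  (two args: the reducing step)
rf([1, 2, 3]); // → [1, 2, 3]  (one arg: completion, identity by default)
```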

My final code ended up looking like this:

keysToEnter (transduce (filter #(not (.hasOwnProperty prevChildMapping %)))
                       (completing (fn [arr x] (.push arr x) arr))
                       #js []
                       nextKeys)

The reducing function that uses the native javascript push function is wrapped in completing that makes it suitable for transducing.
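For comparison, here is the same end result in plain JavaScript with some sample data I made up, which also shows why push needs wrapping:

```javascript
// Sample data (my own): keys present in next but not in prev should "enter".
const prevChildMapping = { a: 1, b: 2 };
const nextKeys = ['a', 'b', 'c', 'd'];

// Array.prototype.push returns the new length, so wrap it to return the array.
const pushReducer = (arr, x) => { arr.push(x); return arr; };

const keysToEnter = nextKeys
  .filter(k => !Object.prototype.hasOwnProperty.call(prevChildMapping, k))
  .reduce(pushReducer, []);
// keysToEnter → ['c', 'd']
```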

I think I’ve ended up with more code than I started with, and I also think that this is a poor example of transducers, but I wanted to outline the mechanics involved in using transducers with native javascript arrays as I could find absolutely nothing on the google, so hopefully this will point somebody else in the right direction.

If I have got anything wrong in this explanation then please leave a comment below.

Clojure - Installing a Local Jar With Leiningen

I ran into a situation earlier where I wanted to use the 0.8.0-beta2 version of om but it had not yet been pushed to clojars.

The answer was to install the jar in a directory local to the project I was working on.

Here are the steps I took to install the om jar locally:
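The gist with the steps is not embedded above, so here is a sketch of one common recipe (the repo directory name and jar path are my assumptions): declare a file-based repository in project.clj and install the jar into it with maven.

```
;; project.clj -- point leiningen at a repository directory inside the project
:repositories {"local" "file:repo"}

;; then install the jar into that directory with maven, e.g.
;; mvn deploy:deploy-file -DgroupId=om -DartifactId=om -Dversion=0.8.0-beta2
;;     -Dpackaging=jar -Dfile=om-0.8.0-beta2.jar -Durl=file:repo
```

After that, [om "0.8.0-beta2"] can be referenced from :dependencies as normal.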

Ember.js - Programmatically Bind From a Handlebars Helper

Warning: The examples use Ember 1.7.1

This is a quick update to my last post where I created a helper that bound data from an external json hash.

One problem I ran into was that if I was not creating links via the link-to helper then the properties were not bound. In line 11 of the gist below, I am returning a simple string that will render the unbound property wrapped in a span tag.

The solution was to call out to the handlebars bind helper after updating the options hash on lines 2-5 below:

Here is an updated jsbin with a full working example.

I am not sure if this is relevant for life after ember 1.7.1 but these techniques have worked well for me thus far.

Ember.js - Rendering Dynamic Content

Warning: The examples use Ember 1.7.1

I’m not going to go into great detail in this post as I think the code examples will be out of date post ember 1.7.1.

I recently had the problem of how to make a complex table reusable across different datasets. As each dataset would contain objects with different fields, I would not be able to use the usual handlebars binding syntax:


My solution was to create a json hash that is similar to the one in the gist below that specified which fields I was binding to and whether or not, I was going to render a simple text field, a link or for complex scenarios a component:

  • The address structure on line 9 is the most basic as it just specifies a property to bind to.
  • The structure on line 4 contains a route property to signify that I want a link generated.
  • The structure on line 13 contains a component property that unsurprisingly will render a component. A component can also take an array of bindings on line 15 that will have the effect of calling the component like below:
{{full-contact name=name address=address}}

Now in my template, I just iterate over this columns collection and either call a handlebars helper that renders the text or link (line 9) or I call out to a different helper that will render the component (line 7)

If I am rendering a simple text or link, then I call the helper that is outlined below:

If I am rendering a component then this helper is called:

Here is a working jsbin.

That is all I have to say on the matter but I would love to hear an alternative or better approach to the above.

Clojurescript - Om - Dynamic Components and Unique Keys

As I get older, my tolerance for javascript seems to be getting worse so I’ve been employing clojurescript as a shield from the true horrors of javascript. I’ve been playing around with om which acts as an interface to facebook’s react.

While developing with om, I kept getting the same javascript error after rendering a dynamic list like below:

If you have done any developing with om, then I am confident in saying that you will have come across this warning after rendering such a list:

Each child in an array should have a unique “key” prop. Check the renderComponent call using <tbody>.

Here is the code I was using to render such a list:

Line 12 in the above is the villain of this piece as there is no way that I could find of passing a func that will set the key property that react uses to identify each dynamic child.

I have been trying to ignore this warning as it did not cause any code to stop executing but I kept seeing this question pop up on the irc channel and I could not find a good answer on the google.

It also turns out that not supplying a react key for each dynamic item could lead to some very unexpected behaviour.

After consulting the om docs I discovered that om/build can take a third :opts argument and one of the allowed keys is a :react-key which is one of the solutions to the problem.

Armed with this information, I refactored the above code to the following:

The only change is on line 12:

(map-indexed #(om/build clip-view %2 {:react-key %1}) clips))

I have used the simplest case of an index for the :react-key but if I was rendering from a list where each item had a unique identifier then I would use the :key option to specify an element property that would be bound as the react key and would look something like this:

(map #(om/build clip-view % {:key :id}) clips))

(Thanks to Anna Pawlicka for mentioning this in the comments below.)

Or probably the best option is to pass an :opts map containing a :key option to om/build-all:

(om/build-all clip-view clips {:key :id})

Thanks to Daniel Szmulewicz for mentioning this on twitter.

I can now paste a link to this post when somebody asks how to get rid of this warning on the clojurescript irc channel.

Clojure - Lazy Sequences for the Masses

I have recently fallen head over heels in love with clojure and I’ve decided to run through the problems on the excellent 4Clojure site to ensure that I have a good understanding of the language before I even think about dealing with IO or building something meaningful. I’m currently on question 60, which introduces the concept of lazy sequences and is displayed below:

Write a function which behaves like reduce, but returns each intermediate value of the
reduction.  Your function must accept either two or three arguments, and the return
sequence must be lazy.

(= (take 5 (__ + (range))) [0 1 3 6 10])

What struck me as totally strange when I first read this problem is that an infinite list is being passed to the function in the form of (range). I had only previously seen range used with at least an end argument like below:

user=> (range 10)
(0 1 2 3 4 5 6 7 8 9)

But if no start or end arguments are supplied then the default for end is infinity.

;; default value of 'end' is infinity
user=> (range)
(0 1 2 3 4 5 6 7 8 9 10 ... 12770 12771 12772 12773 ... n

So why on earth would you want to supply an ever expanding range? The answer to this puzzle within a puzzle was to clarify what range actually returns which according to the docs is:

Returns a lazy seq of nums from start (inclusive) to end (exclusive), by step, where start defaults to 0, step to 1, and end to infinity.

The 4clojure problem at the start of this post also asks the user to return a lazy sequence for the solution. So what is a lazy sequence? As a newbie to clojure, I struggled to find good information on what an actual lazy sequence was, and the docs left me with an unclear understanding of how I should use lazy-seq to construct one.

Lazy Sequences

Laziness in this context means that you can specify a computation that could theoretically take forever to complete, but you can evaluate as much or as little of it as you need. A very simple example of this is below:

user=> (take 10 (range))
(0 1 2 3 4 5 6 7 8 9)

In the above example, range is being called with no end argument, which according to the docs means that end defaults to infinity. But if we type the expression into the repl, a sequence with the first 10 elements is immediately evaluated, so it would appear that take 10 is not waiting for range to reach infinity before grabbing the first 10 elements.

A closer look at what type of sequence is returned starts to give the game away:

user=> (class (take 10 (range)))
clojure.lang.LazySeq

As range with no end argument defaults to infinity, the only way that this could possibly work is if range returns a lazy sequence or one that realizes its elements as requested. The original 4Clojure problem stated that the sequence returned from the solution must be lazy in order to work with an infinite range:

(= (take 5 (__ + (range))) [0 1 3 6 10])

After a quick google to determine how I can return a lazy sequence, I came across the clojure docs for the lazy-seq function but I did not fully grasp what the explanation meant:

Takes a body of expressions that returns an ISeq or nil, and yields
a Seqable object that will invoke the body only the first time seq
is called, and will cache the result and return it on all subsequent
seq calls. See also - realized?

In my haste to complete the problem, I simply thought that all I had to do was wrap an existing function in a lazy-seq call and everything would just magically work. Here is my original attempt at solving the question:

All I have done is wrap a call to reduce on line 3 in a call to lazy-seq on line 2, which of course resulted in a stack overflow, because reduce does not return a lazy sequence and instead will execute until the computation is complete, which in this case is never, as reduce is being called on an infinite list. I needed to go back to the drawing board. What I needed to do was create a sequence that is realized as needed.

After a bit more digging, it became apparent that lazy sequences are not really sequences that simply return a list of elements but are more comparable to iterators in other languages such as java that conform to a sequence api. With a lazy sequence it is up to the client or the calling function to decide how many elements to consume. A lazy sequence is just a data structure that waits until you ask it for a value. So how does one return a lazy sequence?

The Simplest Example Imaginable

Below is the simplest example that I could think of:

  • Line 1 defines a function named add-one that accepts a sequence that could be infinite.
  • Line 2 is where the action happens, the lazy-seq macro is employed to create a lazy sequence whose body is an expression. The body of this lazy sequence uses the cons function which takes an element and a sequence and returns a new sequence by prepending the element to the sequence. The element in this case is the result of incrementing the first element of the list with the expression:
(inc (first se))

The sequence that the element is prepended to is the result of recursively calling the add-one function again. If a lazy sequence was not returned from add-one then add-one would be called over and over again until a stackoverflow exception was thrown. The whole point of the lazy-seq macro is to circumvent this and ensure that the function will only be accessed as the elements are requested. A sequence that is returned from lazy-seq contains the requested element and a pointer or a function to get the next element.
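The same realise-on-demand idea can be expressed in JavaScript with ES6 generators (my own analogy, not from the original post): values are only produced as the consumer asks for them.

```javascript
// A conceptually infinite sequence, like (range):
function* naturals() {
  for (let i = 0; ; i++) yield i;
}

// Like add-one: produce each element of the (possibly infinite) input
// incremented by one, lazily, one value per request.
function* addOne(seq) {
  for (const x of seq) yield x + 1;
}

const it = addOne(naturals());
const firstFive = [];
for (let i = 0; i < 5; i++) firstFive.push(it.next().value);
firstFive; // → [1, 2, 3, 4, 5]
```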

Another mistake I made in my original misunderstanding was thinking that cons was returning a lazy sequence or was somehow involved in returning a lazy sequence. cons does not return a lazy sequence (the thing that does that is lazy-seq), but it is a good tool to use as part of building a lazy sequence.

The 4clojure Solution

Before I show my solution, I will remind you of the original 4Clojure problem.

Write a function which behaves like reduce, but returns each intermediate value of the reduction.
Your function must accept either two or three arguments, and the return sequence must be lazy.

(= (take 5 (__ + (range))) [0 1 3 6 10])

The question is really asking the user to recreate clojure’s core reductions function. Below is my solution that took me quite a while to get to but I am very pleased with.

The important things to note are that an infinite range is passed into the named anonymous function my-reduct on line 12, but because a lazy sequence is returned on lines 8 and 11, the resulting sequence will only be realized as requested. Because the take function is used on line 1 to request the lazy sequence with an argument of 5, the function will only be called 5 times.
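For readers more at home in JavaScript, the shape of the solution can be sketched with an ES6 generator (my analogy; the actual solution is clojure):

```javascript
// Yield every intermediate value of the reduction; the (possibly infinite)
// input is only consumed as values are requested.
function* reductions(f, seq) {
  const iter = seq[Symbol.iterator]();
  let step = iter.next();
  if (step.done) return;
  let acc = step.value;
  yield acc; // the first intermediate value is the first element
  while (!(step = iter.next()).done) {
    acc = f(acc, step.value);
    yield acc; // each subsequent intermediate value
  }
}

function* naturals() { for (let i = 0; ; i++) yield i; }

const reduction = reductions((a, b) => a + b, naturals());
const sums = [];
for (let i = 0; i < 5; i++) sums.push(reduction.next().value);
sums; // → [0, 1, 3, 6, 10]
```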

I found lazy sequences very difficult to wrap my head around and part of that process was to write this post.

If I have inaccurately described anything or I can improve this explanation then please leave a comment below.

ember.js - Animating Deletes With the BufferedProxy

I encountered an interesting scenario in my day job that initially had me scratching my head for a way to solve and seems worthy of a post. The real world application that I am working on actually has real world todos! The application lists the todos as is illustrated in the screenshot below and they can be completed in the time honoured fashion of clicking the checkmark or tick on the left hand side of the todos:

One of the requirements for this list is to display only the todos that have not been completed. Ember does make this ridiculously easy and we could even bind our list to the new filterBy computed property like this:

Here is a working jsbin that shows how easy this is.

The problem with the approach outlined in the jsbin is that, when the checkbox is checked and the bindings are flushed, the view that has been rendered for that particular todo gets instantly destroyed. There are currently no hooks that allow for animations in ember and this is particularly true when it comes to removing items from a bound list or indeed destroying views. This leaves you as the developer to resort to some sort of trickery. I would personally like to see an extra runloop queue or a view lifecycle event that returned a promise. Only when the promise has been resolved would the view’s destroy method be called. There was some discussion about this before ember 1.0 was released but we are heading towards ember 1.6 and I think we need to discuss this again.

My requirements for this todo list are even more convoluted because I want to visually indicate that the todo has been checked and also give the user a 4 second chance to change their mind and uncheck the todo before it disappears from the list. As you can see from the jsbin, the moment the checkbox is checked, the todo just vanishes instantly. So, what can be done?

The BufferedProxy

The application that I am working on has been in active development for over a year and we still use version 0.14 of ember-data which was the last version of ember-data that was released before the much publicised reboot. I have made a solemn oath to my client that I would not even think of upgrading until ember-data reached 1.0 after the pain we suffered with other ember-data upgrades. One problem with this version of ember-data (I cannot speak for the latest version) is that it is terribly annoying to work with models that are left in isDirty or invalid states. I am not going to go into detail here but the state machine assertions are a source of depression for anybody that has had the experience. I have since patched up my slightly bastardised version of ember-data but what makes life easier is to actually not bind to an ember-data model and only create the model when you are absolutely sure that your model is ready to be persisted.

The BufferedProxy has been an absolute godsend for my productivity with this version of ember-data. The BufferedProxy is a mixin that I mix into controllers; it uses ember’s lesser-known method_missing-like feature (check out unknownProperty and setUnknownProperty) to store changes to a controller’s model in a buffer that you can either apply explicitly or cancel completely without the model ever being changed. This is perfect for ember-data models. We can only apply the modifications if we are happy that the model is valid or that the user has not navigated away, cancelled, left the building, been set on fire, shot, kidnapped etc.

The BufferedProxy is ideal for my requirements which are:

  1. I want to delay setting isFinished on the model so that the user has a few seconds to change their mind and undo the change.
  2. I want to animate the deletion of the todo view to make it look a bit more polished than it is in the original bin where it instantly disappears.

Here is a working jsbin of what I ended up with. There is a delay in persisting the change that gives the user an opportunity to cancel, and the delete has a basic animation. If you re-check a todo then it will not be removed, or if you leave it checked, a basic animation will kick in and the todo will be removed when the animation is completed.

The first thing to notice is that I am using render instead of a component to render my todos, which gives me a clean separation between controller and view. I think this is better for this situation because I want to mix the BufferedProxy into the controller and not the component, even though I am pretty sure it would work with a component:

In the TodoController, I observe the isFinished property for changes in the following handler:

  • On line 4 I am checking if the todo isFinished and the hasBufferedChanges property lets me know that there are changes in the BufferedProxy that are ready to be applied.
  • On line 5, I create an inline function that will be called after a delay. The delay is to give the user the opportunity to change their mind and cancel the change.
  • The inline function on line 5 will raise an event on line 6 to the TodoView that is listening for these events and will give the view the opportunity to animate the delete. The timer is cancelled on line 7.
  • On line 14 we handle the case where the user has changed their mind. We clear the timer and then persist the change if the BufferedProxy has changes.

Below is the TodoView that handles the animateFinish event that is triggered by the TodoController on line 6 of the above gist.

  • line 5 sets up the event handler for animateFinish events that are raised by the TodoController.
  • lines 10 and 11 use jQuery’s fadeOut for some basic animation, which calls a controller action to persist the changes when the animation has finished.

All that remains is to show the completeFinish action that is called in the above gists:

  • On line 4 of the above gist, I apply the changes from the BufferedProxy by calling the applyBufferedChanges method which will transfer the changes from the buffer to the model.
  • On line 6, I persist the model.

I hope if nothing else that this post has highlighted the complexities of adding animations to ember, especially in the destroy phase. Animation is a must-have for modern day javascript single page applications and not a nice-to-have. I’m not sure if this is on the ember roadmap but I think this is something that can no longer be ignored. I would like to see a run loop queue or view lifecycle event that allowed me to return a promise, with the view’s destroy method only being called when that promise has resolved.

Please comment if any of the above has unsettled you.

ember.js - Computed Property for All Keys of an Object

I came across a situation today where I wanted to create a computed property which would recalculate itself every time one of the properties of an object changed. The lazy and tedious thing to do would have been to simply list them out like this:

This is tedious for a number of reasons but the main one is that I have to remember to update the dependent key list, every time the object changes.

Below is the solution I came up with:

This is slightly more convoluted than I would have liked, as I could not create a real property because I need the context argument to be an instance on line 3 so that I can call Ember.keys on this instance. Instead of creating an actual property, I use defineProperty on line 11 to create the property at runtime. I create the dependent key list by using Ember.keys to iterate over the subject’s properties and then call Ember.computed.apply to mirror the call from the gist at the top of the post.

Below is the calling code:

It is slightly dissatisfying in that I could not create an actual property and if you can suggest a better way then please leave a comment below.

Partial Functions With Bind

I gave a talk last night at the local Glasgow JavaScript users’ group and I was a little surprised that not everybody had heard of the bind method, which is a member of Function.prototype. I put the blame for this firmly at internet explorer’s door, because most of us still have to deal with internet explorer 8 and below, which runs ECMAScript 3, and bind is only available from ECMAScript 5 and beyond. Internet Explorer has a nuclear half-life that is much longer than Chernobyl’s and will probably still be emitting deadly radiation for many years to come.

The bind method is one solution to having to declare a self or a that or a _this variable when we want to specify what this will be when a function is executed. Below is an example where two new functions (lines 12 and 16) are created in order to set what the context will be when the two variations of the sayHello methods are executed for a particular person:

We can tidy this up by using Function.prototype.bind:

Lines 12 and 13 use bind to create a new function that, when called, has its this value set to whatever the given value between the parentheses is.
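Since the gists are not embedded here, a self-contained sketch of the idea (with names of my own choosing) looks like this:

```javascript
// bind returns a new function whose `this` is fixed in advance,
// so no self/that/_this variable is needed.
function sayHello() {
  return 'Hello, ' + this.name;
}

const person = { name: 'Ada' };
const boundSayHello = sayHello.bind(person);

boundSayHello(); // → 'Hello, Ada'
```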

It is worth mentioning that if you want to use bind in older browsers then you still can by using a polyfill such as this and I would emplore everybody who is reading this to do just that if you are not already doing so.

Partial Functions

What is less well known about bind is that we can use it to make a function with pre-specified initial arguments. These arguments, if supplied, follow the provided this value and are inserted at the start of the arguments passed to the target function.
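A minimal example of this (mine, not from the post):

```javascript
function add(a, b, c) {
  return a + b + c;
}

// `this` is irrelevant here, so null is passed; 10 is pre-bound to `a`.
const addTen = add.bind(null, 10);

addTen(1, 2); // → 13
```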

Let me illustrate this with an example, the code below contains a chain of promises with almost identical resolve handlers. I know I could use RSVP.hash for this but I am merely illustrating the point:

On lines 4, 8 and 12 of the above gist, I am setting a property of the self reference to the data returned from the async call and on lines 6, 10 and 14 I am calling getJSON with a different url that will return the next resource. The only things that are different are the property name and the string literal for the next url.

The function below would DRY this up nicely:

The only problem I have is that the above function’s argument list does not fit into my resolve handler’s argument list, where only one argument is passed containing the result of the async call. The answer is of course to use bind and supply some pre-specified initial arguments:

On lines 9 - 11, I am supplying some pre-specified arguments to the bind function that will be bound to the first two arguments of the continuer function when the specific instances are called. I think this DRYs up things quite nicely.
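As the gists are not embedded, here is a hedged reconstruction of the pattern (the name continuer, the property names and the url are my assumptions):

```javascript
const self = {}; // stands in for the object the results are stored on

// property and nextUrl are pre-bound; data is the async result that the
// resolved promise would pass in at call time.
function continuer(property, nextUrl, data) {
  this[property] = data; // store the async result
  return nextUrl;        // the real code would return getJSON(nextUrl)
}

const handleUsers = continuer.bind(self, 'users', '/contacts.json');
handleUsers({ name: 'Ada' }); // what a resolved promise would pass in
self.users; // → { name: 'Ada' }
```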

ES6 Generators - Synchronous Looking Asynchronous Code

In my opinion, the most challenging thing about programming JavaScript in the browser is having to deal with asynchronicity. As developers, we would much rather write code like this:

Our conscious mind or left brain operates in what can be thought of as an abstraction of reality. The universe is actually an infinitesimal series of events happening simultaneously that our conscious mind cannot grasp; it thinks sequentially or linearly and we process one thought at a time. We have created our computers to reflect this and, for all intents and purposes, the programs that we write are at their base level a series of sequential inputs.

The key to writing good asynchronous code is to make it look synchronous.

The unfortunate thing is that when trying to deal with asynchronicity through the use of plain callbacks, our code has often ended up like the sample below which is often referred to as callback hell:

I could almost live with the code above if it weren’t for error handling. Having to pass callback and errBack handlers to each call quickly turns into a tangled mess.

It is worth stating that there is absolutely nothing wrong with callbacks; they are in fact the currency of asynchronous programming, and the newer asynchronous abstractions like promises would not be possible without them. What is wrong with the above code is that it quickly becomes tightly coupled as you pass success callbacks and failure errBack callbacks to each async call.

Promises are probably the most commonly used alternative to callback hell and below is a refactoring of the above using the RSVP promise library.

What is nice about the above code is that we have one catch handler on line 19 for all of the getJSON calls and in the event of an error in any of these calls, the error will be propagated forward to the next available error handler.

A promise represents the eventual outcome of an asynchronous or synchronous operation. A promise is also referred to as a thenable, which is really just an object or a function that defines a then method. A promise is either fulfilled in the good case or rejected in the not so good case. A promise will execute a success callback or an error callback on completion of the asynchronous or even synchronous work. Promises are possibly the most popular solution to callback hell that you will find in production code today. The solution I am going to show with es6 generators makes heavy use of promises to give the illusion of synchronicity.

So if promises are where we are then es6 generators appear to be where we are headed. I’ve just spent the morning playing about with the new generators feature of es6 and I was able to refactor the previous two code samples to the code below, which is almost synchronous looking:

Before I explain what is going on here, let me briefly explain what a generator is. There are lots of other posts on this topic but here is some code that explains how a simple generator works at a base level:

  • Line 1 Defines a generator function using the new * syntax. A generator function is a function that returns an iterator which can be thought of as an object that moves the target object on to the next value in a sequence. This iterator exposes a next() interface to aid with iteration. A javascript iterator yields successive nextResult objects which contain the yielded value (line 12) and a done flag (line 10) to indicate whether or not all of the values have been returned.

  • Line 11 assigns the returned nextResult object to a local variable which we can check on each iteration of the loop. When next is called, the iterator returned from the oneToThree function advances to the next yield statement (lines 2-5) and returns its value. If there are no more yield statements then the done flag is set to true.
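A hedged reconstruction of that basic sample (the oneToThree name comes from the prose above; everything else is illustrative):

```javascript
// A generator function: each call to next() runs to the next yield.
function* oneToThree() {
  yield 1;
  yield 2;
  yield 3;
}

var iterator = oneToThree();
var result = iterator.next();

while (!result.done) {
  console.log(result.value); // logs 1, then 2, then 3
  result = iterator.next();
}
```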

It is also possible to send values back to the iterator by calling next with an argument:
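A hedged illustration of this (the greeter generator and the values are my own stand-ins): the argument passed to next() becomes the value of the yield expression the generator is currently paused on.

```javascript
function* greeter() {
  // next('Paul') resumes here and 'Paul' is assigned to name
  var name = yield 'What is your name?';
  yield 'Hello ' + name;
}

var it = greeter();
console.log(it.next().value);       // "What is your name?"
console.log(it.next('Paul').value); // "Hello Paul"
```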

In the above example, a value is passed back into the iterator by calling next with an argument; that value becomes the result of the paused yield expression and is assigned to the variable on its left hand side. Using this technique, the variables users, contacts and companies on lines 17, 18 and 19 can all be assigned values by passing arguments via next.

If you have grasped this two way communication between the calling code and the iterator then you can see that the code above is not a million miles away from the code below, where self.users, self.contacts and self.companies on lines 6, 7 and 8 below are seemingly assigned the results of asynchronous calls.

The key to this magic is the async function call on line 4 of the above code and below is the listing of this async function.

I lifted the async method from the Q promise library with some minor changes to get it to integrate with promises defined in RSVP that I use on a day to day basis with ember.
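For reference, here is a hedged reconstruction of that async function (the native Promise API stands in for RSVP, and the line numbers in the prose that follows refer to the original listing rather than this sketch):

```javascript
// Q-style async driver: turns a generator function into a function that
// runs it to completion, resuming on each resolved promise.
function async(makeGenerator) {
  return function () {
    var generator = makeGenerator.apply(this, arguments);

    function continuer(verb, arg) {
      var result;
      try {
        // verb is "next" or "throw", baked in below via bind
        result = generator[verb](arg);
      } catch (exception) {
        return Promise.reject(exception);
      }
      if (result.done) {
        return result.value;
      } else {
        return Promise.resolve(result.value).then(callback, errback);
      }
    }

    var callback = continuer.bind(continuer, 'next');
    var errback = continuer.bind(continuer, 'throw');
    return callback();
  };
}
```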

Generators work synchronously, which is different from what I originally thought; I assumed there was some dark witchcraft that made the calls work asynchronously. This is not true: the key to making them work asynchronously is to return a promise or another object that describes an async task. In this example, each yield statement returns a promise like this:

The basic premise that makes this possible is that we are passing the generator function to async like this:

The async function assigns the iterator that is returned from this generator function to a local variable on line 16:

The async function then uses the often overlooked ability to pass arguments via the bind function:
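A small hedged illustration of this bind behaviour (the names mirror the async listing, the return value is just for demonstration): bind can pre-fill leading arguments as well as fix `this`, and calling the bound function supplies the rest.

```javascript
function continuer(verb, arg) {
  return verb + ':' + arg;
}

// "next" and "throw" are baked in as the verb argument
var callback = continuer.bind(null, 'next');
var errback = continuer.bind(null, 'throw');

console.log(callback('users')); // "next:users"
console.log(errback('boom'));   // "throw:boom"
```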

The callback and errback function pointers both reference the continuer function on line 2 but because of the way they were declared with the bind function, either next or throw will be bound to the verb parameter depending on which function pointer is invoked.

When the callback reference is invoked on line 20 with the return callback(); statement, the verb argument will have a value of next which will result in the generator asking for the first value from the iterator or generator function.

Which is the equivalent of:
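That is, generator[verb](arg) with verb bound to "next" is just a dynamic spelling of a plain next() call (a hedged sketch with a throwaway sample generator):

```javascript
function* sample() {
  yield 'first';
}

var generator = sample();
var verb = 'next';
var arg; // undefined on the first invocation

var result = generator[verb](arg); // same as generator.next(undefined)
console.log(result.value, result.done); // "first" false
```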

The arg argument will be undefined at this stage because no value has been returned from the iterator. When the first value is asked of the iterator with the above code, the code below will be executed.

The getJSON method returns a promise that is fulfilled or rejected with respect to the successful completion or rejection of the asynchronous ajax call. Once the promise has been created, the yield statement will suspend execution of this function and return the promise created via the getJSON function back to the calling code or async function.
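A hedged sketch of what such a getJSON might look like (the native Promise API stands in for RSVP, and XMLHttpRequest is a browser API):

```javascript
function getJSON(url) {
  return new Promise(function (resolve, reject) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.onload = function () {
      if (xhr.status === 200) {
        // fulfil the promise with the parsed response body
        resolve(JSON.parse(xhr.responseText));
      } else {
        reject(new Error('GET ' + url + ' failed with ' + xhr.status));
      }
    };
    xhr.onerror = reject;
    xhr.send();
  });
}
```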

The result variable has been assigned a nextResult object that was described earlier that will have a done flag value of false and a value property that will point to the promise created in the iterator.

The code will now encounter this if statement.

As not all of the yield statements have been executed, the done flag will be false and execution will continue in the else branch. The next line calls RSVP.Promise.resolve and passes result.value as an argument, which at this stage is still the promise returned from getJSON. RSVP.Promise.resolve in this instance just checks that result.value is a promise and, if not, creates a promise that will become resolved with the passed value. In this instance, result.value is already a promise so no cast is required.
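The native Promise.resolve behaves the same way, which this small sketch demonstrates: an existing promise is passed through untouched, while a plain value is lifted into an already-resolved promise.

```javascript
var promise = Promise.resolve(42);

// an existing promise comes back unchanged, no re-wrapping
console.assert(Promise.resolve(promise) === promise);

// a plain value is wrapped in a promise resolved with that value
Promise.resolve(7).then(function (value) {
  console.log(value); // 7
});
```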

This is where things get complicated so keep your wits about you. This result.value promise, which was created from the getJSON(‘/users’) call, will resolve successfully and result in the callback being executed in the promise’s then method below:

The callback will call the continuer function again:

The callback will call the continuer and bind next to the verb parameter but this time, the arg value will be bound to whatever was returned from the getJSON(‘/users’) asynchronous call. This means that whenever next is called:

The arg value will be returned to the iterator function as was explained in the two way communication section earlier and be assigned to the self.users property that was on the left hand side of the yield statement that called getJSON(‘/users’).

This means that self.users below will now be assigned the result of the async ajax call, albeit in a rather roundabout fashion.

The async function then continues in this recursive manner until all of the yield statements have been called and all of the properties on the left hand side of the yield statements below have been assigned:

Once all the yield statements have been executed, the last nextResult returned from the iterator will point to line 5 of the above code sample and will have the done flag set to true. This means that execution will continue in the if path of the code below and the value of the last nextResult object will be returned to the calling code and the process has completed:

Writing this blog post has completely demystified what initially seemed like black magic to me. I hope it has done the same for you.