Features

Learn about the features that will help you get your app done.

All-in-one stack

DoneJS offers everything you need to build a modern web app. It comes with a module loader, build system, MVVM utilities, full testing layer, documentation generator, server side rendering utilities, a data layer, and more. Its completeness is itself a feature.

There's no mixing and matching pieces of your stack. Just npm install and get started.

Choosing a modern stack is neither simple nor straightforward:

  1. What types of tools do you want? Server-side rendering? A virtual DOM, and do you need one? MVVM or Flux? Should you set up testing infrastructure? Documentation?

  2. Choose all your pieces. The good news is, you have many choices. The bad news is, you have many choices. React, Angular, or Backbone? RequireJS, Browserify, or jspm? Jasmine or QUnit? What tool will run your tests?

  3. Finally, you have to make sure your chosen tools work together effectively. Does require.js work well with Angular? Does Karma work with Browserify? What about React and Babel?

DoneJS gives you a full solution. It's our mission to eliminate any ambiguity around choosing technology for building an app, so you spend less time tinkering with your stack, and more time actually building your app.

And as we've proven over the last 8 years, we'll keep updating the stack as the state of the art evolves over time.

Integrated layers

Just like Apple integrates the hardware and software for its devices, DoneJS integrates different technologies in a way that creates unique advantages that you can only get from using an integrated solution.

Cross-layer features

DoneJS spans technology layers, making it easier to do things that are not possible, or at best DIY, with competitor frameworks. Here are a couple of examples:

1. Server-side rendering

Server-side rendering (SSR), which you can read about in more detail in its section below, spans many layers to make setup and integration simple.

Hooks in data components automatically notify the server to delay rendering; hot module swapping integrates automatically (no need to restart the server while developing); and data is automatically collected in an inline cache that prevents duplicate AJAX requests. Support for these features is only possible because of code that spans layers, including can-connect, done-ssr, CanJS, and StealJS.

By contrast, React supports SSR, but you're left to your own devices to support delaying rendering, hot module swapping, and inline caching.

2. Progressive enhancement

You can mark a section of your template to be progressively loaded by wrapping it with <can-import>, like:

<can-import from="components/home">
  {{#if(isResolved)}}
  <home-page/>
  {{/if}}
</can-import>

and then running donejs build.

<can-import> has hooks that notify the build time algorithm to create a bundle for this template fragment and its dependencies. This feature spans StealJS, steal-build, CanJS, and done-cli.

Story-level solutions

Another advantage of the integration between DoneJS' parts is the ability to solve development problems on the level of stories rather than just features.

Solving a story means providing a packaged solution to a development problem, where several features across layers converge to solve the problem from start to finish. Here are several examples of stories that DoneJS solves:

  1. Modular workflow - DoneJS makes it possible for teams to design and share components easily. Starting with generators, users can create modlets that encapsulate everything a custom element needs, easily add documentation and testing, then use npm import and export to easily share the modules with other developers, no matter what module format they're using.

  2. Performance - DoneJS was designed from the start to solve the performance story, packaging server-side rendering, progressive loading, worker thread rendering, data layer caching, and more, all under one roof.

  3. Maintainability - testing, documentation, and MVVM architecture are built into every project.

  4. Developer efficiency - zero-config npm imports, hot module swapping, and ES6 support keep you in flow.

Performance Features

DoneJS is configured for maximum performance right out of the box.

Server-Side Rendered

DoneJS applications are written as Single Page Applications, and can be rendered on the server by running the same code. This is known as Isomorphic JavaScript, or Universal JavaScript.

Server-side rendering (SSR) provides two large benefits over traditional single page apps: much better page load performance and SEO support.

SSR apps return fully rendered HTML. Traditional single page apps return a page with a spinner. The benefit to your users is a noticeable difference in perceived page load performance:

[Diagram: a server-side rendered app delivers fully rendered HTML on first load, while a traditional SPA shows a spinner until its JavaScript loads, executes, and fetches data.]

Compared to other server-side rendering systems, which require additional code and infrastructure to work correctly, DoneJS is uniquely designed to make turning on SSR quick and easy, and the server it runs is lightweight and fast.

Page load performance

Server-side rendered SPAs can load pre-rendered HTML immediately. They can also cache HTML and serve it from a CDN.

Traditional SPAs must load the JS, execute, request data, and render before the user sees content.

SEO

Search engines can't easily index SPAs. Server-side rendering fixes that problem entirely. Even if Google can understand some JavaScript now, many other search engines cannot.

Search engines see the HTML that your server returns, so if you want them to find your pages, you'll want that HTML to contain fully rendered content, not the spinner that a traditional SPA shows while it loads.

How it works

DoneJS implements SSR with a single-context virtual DOM utilizing zones.

Single context means every request to the server reuses the same context, including memory, modules, and even the same instance of the application.

Virtual DOM means a virtual representation of the DOM: the fundamental browser APIs that manipulate the DOM, but stubbed out.

A zone is used to isolate the asynchronous activity of one request. Asynchronous activities, like API requests, are tracked, and DoneJS' SSR waits for all of them to complete, ensuring that the page is fully rendered before the HTML is sent to the user.

When using DoneJS SSR, the same app that runs on the client is loaded in Node. When a request comes in:

  1. The server handles the incoming request by reusing the application that is already running in memory. It doesn't reload the application, which means the initial response is very fast.
  2. The app renders content the same way it would in the browser, but with a mocked out virtual DOM, which is much faster than a real DOM.
  3. The server creates a new zone to wait for all your asynchronous data requests to finish before signaling that rendering is complete.
  4. When rendering is complete, the virtual DOM renders the string representation of the DOM, which is sent back to the client.
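
In practice, wiring this up takes only a few lines. Here's a minimal sketch of an SSR server built on done-ssr's streaming render function (the steal config path and port are illustrative):

var ssr = require("done-ssr");

// Create a render function from the app's steal configuration.
var render = ssr({
  config: __dirname + "/package.json!npm"
});

// Each request is rendered inside its own zone, within the shared
// application context, and streamed back as HTML.
require("http").createServer(function(request, response) {
  render(request).pipe(response);
}).listen(8080);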

Since SSR produces fully rendered HTML, it's possible to insert a caching layer, or use a service like Akamai, to serve most requests. Traditional SPAs don't have this option.

Rather than a virtual DOM, some other SSR systems use a headless browser on the server, like PhantomJS, which uses a real DOM. These systems are much slower and consume far more server resources.

Some systems, even if they do use a virtual DOM, require a new browser instance entirely, or at the very least reload the application and its memory for each incoming request, which is also slower and more resource intensive than DoneJS SSR.

Prepping your app for SSR

Any app that is rendered on the server needs a way to notify the server that any pending asynchronous data requests are finished, and the app can be rendered.

React and other frameworks that support SSR don't provide much in the way of solving this problem. You're left to your own devices to check when all asynchronous data requests are done, and delay rendering.

In a DoneJS application, asynchronous data requests are tracked automatically. Using can-zone, DoneJS keeps a count of requests that are made and waits for all of them to complete.
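
Here's roughly what that tracking looks like with can-zone on its own (adapted from its documented usage; the URL is illustrative):

var Zone = require("can-zone");

new Zone().run(function() {
  // Any async work started here - timeouts, XHRs, promises -
  // is counted by the zone.
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/messages");
  xhr.send();
}).then(function() {
  // Runs only once every tracked task has completed,
  // so it's now safe to render the page.
});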

View the Documentation View the Guide

Server-side rendering is a feature of done-ssr

Progressive Loading

When you first load a single page app, you're typically downloading all the JavaScript and CSS for every part of the application. These kilobytes of extra weight slow down page load performance, especially on mobile devices.

DoneJS applications load only the JavaScript and CSS they need, when they need it, in highly optimized and cacheable bundles. That means your application will load fast.

There is no configuration needed to enable this feature, and wiring up progressively loaded sections of your app is simple.

How it works

Other build tools require you to manually configure bundles, which doesn't scale with large applications.

In a DoneJS application, you simply mark a section to be progressively loaded by wrapping it in your template with <can-import>.

{{#eq(page, 'home')}}
<can-import from="components/home">
  {{#if(isResolved)}}
  <home-page/>
  {{/if}}
</can-import>
{{/eq}}
{{#eq(page, 'chat')}}
<can-import from="components/chat">
  {{#if(isResolved)}}
  <chat-page/>
  {{/if}}
</can-import>
{{/eq}}

Then you run the build.

donejs build

A build time algorithm analyzes the application's dependencies and groups them into bundles, optimizing for minimal download size.

That's it! No need for additional configuration in your JavaScript.
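
For reference, donejs build runs a build script that uses steal-tools; a roughly equivalent programmatic sketch (assuming steal-tools' build API):

var stealTools = require("steal-tools");

// Analyze the dependency graph starting from package.json's steal
// configuration and write optimized, progressively loadable bundles.
stealTools.build({
  config: __dirname + "/package.json!npm"
});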

View the Documentation View the Guide

Progressive Loading is a feature of StealJS with additional support via the <can-import> tag of CanJS

Caching and Minimal Data Requests

DoneJS improves performance by intelligently managing the data layer, taking advantage of various forms of caching and request reduction techniques.

Undoubtedly, the slowest part of any web application is the round trip to the server. Especially now that more than 50% of web traffic comes from mobile devices, where connections are notoriously slow and unreliable, applications must be smart about reducing network requests.

Making matters worse, the concerns of maintainable architecture in single page applications are at odds with the concerns of minimizing network requests. This is because independent, isolated UI widgets, while easier to maintain, often make AJAX requests on page load. Without a layer that intelligently manages those requests, this architecture leads to too many AJAX requests before the user sees something useful.

With DoneJS, you don't have to choose between maintainability and performance.

DoneJS uses the following strategies to improve perceived performance (reduce the amount of time before users see content rendered):

  • Fall through caching - Cache data in localStorage. Automatically show cached data immediately, but look for updates on the server in the background and merge changes.
  • Combining requests - Instead of making multiple, independent requests to the same API, combine them into a single request.
  • Request caching - Reduce the number and size of server requests by intelligently using cached datasets.
  • Inline cache - Use data embedded in the page response instead of making duplicate requests.

How it works

can-connect makes up part of the DoneJS model layer. Since all requests flow through this data layer, by making heavy use of set logic and localStorage caching, it's able to identify cache hits, even partial hits, and make the most minimal set of requests possible.

It acts as a central hub for data requests, making decisions about how to best serve each request, but abstracting this complexity away from the application code. This leaves the UI components themselves able to make requests independently, and with little thought to performance, without actually creating a poorly performing application.

Fall through caching

Fall through caching serves cached data first, but still makes API requests to check for changes.

The major benefit of this technique is improved perceived performance. Users will see content faster. Most of the time, when there is a cache hit, that content will still be accurate, or at least mostly accurate.

This benefits two situations. The first is page loads after the initial one (the initial load populates the cache); this scenario is less relevant when using server-side rendering. The second is long-lived applications that make API requests after the page has loaded; these enjoy improved performance.

By default, this is turned on, but can easily be deactivated for data that should not be cached.

Here's how the caching logic works:

  1. When the application loads, it checks for available cache connections.
  2. When a request is made, it checks for a cache hit.
  3. If there is a hit, the request is completed immediately with the cached data.
  4. Regardless of a hit or miss, a request is made in the background to the actual API endpoint.
  5. When that response comes back, if there was a difference between the API response data and the cache hit data, the initial request promise's data is updated with the new data. Template data bindings will cause the UI to update automatically with these changes.
  6. Updated response data is automatically saved in the cache, to be used for future requests - whether that's in the current page session, or when the user comes back in the future.
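
Wired together, a fall-through-cached connection might look like this sketch (behavior names follow can-connect's documented modules; the URL and connection names are illustrative):

var connect = require("can-connect");

// A cache connection backed by localStorage.
var cacheConnection = connect([
  require("can-connect/data/localstorage-cache/localstorage-cache")
], {
  name: "todos"
});

// The main connection serves cache hits immediately, then falls
// through to the API and merges any changes back in.
var todoConnection = connect([
  require("can-connect/data/url/url"),
  require("can-connect/fall-through-cache/fall-through-cache"),
  require("can-connect/constructor/constructor"),
  require("can-connect/constructor/store/store")
], {
  url: "/api/todos",
  cacheConnection: cacheConnection
});
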
Combining requests

Combining requests combines multiple incoming requests into one, if possible. This is done with the help of set algebra.

DoneJS collects requests that are made within a few milliseconds of each other, and if they are pointed at the same API, tries to combine them into a single superset request.

For example, the video below shows an application that displays two filtered lists of data on page load - a list of completed todos and a list of incomplete todos. Both are subsets of a larger set of data - the entire list of todos.

Combining these into a single request reduces the number of requests. This optimization is abstracted away from the application code that made the original request.
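
The set algebra behind this comes from the can-set library. A sketch of how two filtered requests are recognized as one superset request (assuming can-set's Algebra API):

var set = require("can-set");

// Declare that "complete" is a boolean property.
var algebra = new set.Algebra(
  set.props.boolean("complete")
);

// {complete: true} and {complete: false} together cover everything,
// so one request for {} can serve both lists.
set.union({ complete: true }, { complete: false }, algebra); //-> {}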

Request caching

Request caching is a type of caching that is more aggressive than fall through caching. It is meant for data that doesn't change very often. Its advantage is that it reduces both the number of requests that are made and the size of those requests.

There are two differences between request and fallthrough caching:

  1. Cached data is not invalidated.

Once data is in the cache, no more requests to the API for that same set of data are made. You can write code that invalidates the cache at certain times, or after a new build is released.

  2. The smallest possible request is made, based on the contents of the cache, and merged into a complete result set.

The request logic is more aggressive in its attempts to find subsets of the data within the cache, and to only make an API request for the subset NOT found in the cache. In other words, partial cache hits are supported.
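
can-set's difference operation captures this: given what's cached, it computes the smallest remaining set to request (again assuming can-set's Algebra API):

var set = require("can-set");

var algebra = new set.Algebra(
  set.props.boolean("complete")
);

// The cache holds {complete: true}; a request for everything ({})
// only needs the complement fetched from the API.
set.difference({}, { complete: true }, algebra); //-> { complete: false }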

The video below shows two example scenarios. The first shows the cache containing a superset of the request. The second shows the cache containing a subset of the request.

Inline cache

Server-side rendered single page apps (SPAs) have a problem with wasteful duplicate requests. These can cause the browser to slow down, waste bandwidth, and reduce perceived performance.

  1. When a page is rendered server-side, it makes data requests on the server to various APIs.
  2. After the page's rendered HTML loads in the client, the SPA is loaded in the client, so that subsequent requests are handled within the SPA.
  3. The SPA will want to re-request the same data that was already requested on the server.

DoneJS solves this problem with an inline cache - embedded inline JSON data sent back with the server rendered content, which is used to serve the initial SPA data requests.

DoneJS uniquely makes populating and using the inline cache easy. Each data request made during the server-side render, even a plain XHR:

  1. Tells the SSR server to wait for its promise to resolve before rendering.
  2. Has its response data collected and used to populate the inline cache.

For example:

import Component from "can-component";
import stache from "can-stache";
import User from "models/user"; // assumed path to the User model

Component.extend({
  tag: "user-name",
  view: stache("{{user.name}}"),
  ViewModel: {
    init: function () {
      var self = this;
      User.get({ id: this.id }).then(function (user) {
        self.user = user;
      });
    }
  }
});

The model layer seamlessly integrates the inline cache in client side requests, without any special configuration.

While this flow would be possible in other SSR systems, it would require manually setting up all of these steps.

This video illustrates how it works.

View the Documentation View the Guide

Caching and minimal data requests is a feature of can-connect

Minimal DOM Updates

The rise of templates, data binding, and MV* separation, while boosting maintainability, has come at the cost of performance. Many frameworks are not careful or smart with DOM updates, leading to performance problems as apps scale in complexity and data size.

DoneJS' view engine touches the DOM more minimally and specifically than competitor frameworks, providing better performance in large apps and a "closer to the metal" feel.

Take the TodoMVC application as an example. If you measure how long it takes DoneJS and React to render the same number of todos, you'll see the performance advantage of minimal DOM updates. In fact, we did just that, and here's the result:

Measuring React and DoneJS using TodoMVC. For a small set of todos the difference is negligible, but as the number increases the gap widens, to the point where React is 6 times slower than DoneJS when rendering 1000 todos.

You can run this test for yourself at JS Bin.

How it works

Consider the following template:

{{#rows}}
<div>{{name}}</div>
{{/rows}}

And the following change to its data:

rows[0].name = 'changed'; // change the first row's name

In DoneJS, which uses the can-stache view engine, that would:

  1. Trigger an event (because of the DefineMap object observe API)
  2. The event invokes a data binding event handler in the template layer
  3. The handler immediately results in the following code being run:
textNode.nodeValue = 'changed';

In Backbone, you would need to manually re-render the template or roll your own rendering library.

In Angular, at the end of the current $digest cycle, that would result in an expensive comparison between the old rows array and the new one to see what properties have changed. After the changed property is discovered, the specific DOM node would be updated.

In React, that would result in the virtual DOM being re-rendered. A diff algorithm comparing the new and old virtual DOM would discover the changed node, and then the specific DOM node would be updated.

Of these four approaches, DoneJS knows about the change the quickest, and updates the DOM the most minimally.
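
You can see the synchronous notification for yourself with a DefineMap (a small sketch using can-define's observable maps):

import DefineMap from "can-define/map/map";

const person = new DefineMap({ first: "John", last: "Doe" });

// No dirty checking, no diffing: listeners run synchronously.
person.on("first", function(ev, newVal, oldVal) {
  console.log(newVal + " (was " + oldVal + ")"); // "Jane (was John)"
});

person.first = "Jane"; // logs immediately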

To see this in action, run the test embedded below, which compares how DoneJS, React, and Angular update the DOM when a single property changes: Measuring DoneJS, React and Angular rendering a simple property change.

You can run this test yourself at JS Bin.

With synchronously observable objects and data bindings that change minimal pieces of the DOM, DoneJS aims to provide the best possible mix between powerful, yet performant, templates.

can-stache Documentation can-map Documentation

Minimal DOM updates is a feature of CanJS

Memory Safety

Preventing memory leaks is a critical feature of any client-side framework. The biggest source of memory leaks in client-side applications is event handlers. When adding an event handler to a DOM node you have to be sure to remove that handler when the node is removed. If you do not, that DOM node will never be garbage collected by the browser.

How it works

When event listeners are created in a DoneJS application, whether through template event bindings or Control event bindings, these handlers are stored internally. This looks like:

<a href="/todos/new" on:click="newTodo()">New Todo</a>

for templates and:

var TodoPage = Control.extend({
  "a click": function(){
    this.addTodo();
  }
})

for controls. Internally CanJS listens for this element's "removed" event. The "removed" event is a synthetic event that will be used to:

  • Remove all event listeners.
  • Remove DOM data associated with the element.
  • Remove any template bindings, such as computes bound to text within the template.

CanJS is different from other frameworks in that it will clean up its own memory even when the framework is not used to tear down the DOM. For example, if you were to do:

todoAnchor.parentNode.removeChild(todoAnchor);

The event listener created would still be torn down. This is because CanJS uses a MutationObserver to know about all changes to the DOM. When it sees an element was removed it will trigger the "removed" event, cleaning up the memory.
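
Conceptually, the detection works like this (an illustration of the approach, not CanJS's actual source):

// Watch the whole document for removed nodes.
var observer = new MutationObserver(function(mutations) {
  mutations.forEach(function(mutation) {
    Array.prototype.forEach.call(mutation.removedNodes, function(node) {
      if (node.nodeType === 1) {
        // This is where a synthetic "removed" event would be
        // dispatched, tearing down listeners, DOM data, and bindings.
        node.dispatchEvent(new Event("removed"));
      }
    });
  });
});

observer.observe(document.documentElement, {
  childList: true,
  subtree: true
});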

Worker Thread Rendering

Worker thread rendering increases the performance of your application. It essentially allows your application to run entirely within a Web Worker, freeing the main thread to only update the DOM.

Since much of the work is offloaded from the main thread, applications will feel snappy, even while heavy computations are taking place.

How it works

Templates first render in a lightweight Virtual DOM in a Web Worker. Changes are diffed and sent to the main thread to be applied to the real DOM. The main thread is only notified when there are changes to the DOM.

The most expensive part of a web application - DOM updates - is separated from application logic, which means your application can continue to run while DOM reflows occur.

By default, browsers use only a single thread of execution.

A traditional single-threaded JavaScript application: with a single thread, only one operation can occur at a time.

This means that performance problems in any area (expensive computations, DOM rendering, processing a large AJAX response, etc) can block the entire application, leaving the browser feeling "frozen".

With worker thread rendering, DOM updates and application logic are run in parallel threads.

A JavaScript application using a worker thread: application logic can still run while the DOM is rendered, which could nearly double the number of operations per second.

Due to this parallelization, performance problems that may have caused noticeable issues in a single thread will likely not cause any noticeable issues while running in separate threads.

Adding worker thread rendering only requires changing one line. Change the main attribute of your page's script tag from:

<script src="node_modules/steal/steal.js" main="my-app!done-autorender"></script>

to

<script src="node_modules/steal/steal.js" main="my-app!done-worker-autorender"></script>

At this time, no other framework besides DoneJS, including Angular or React, supports worker thread rendering out of the box.

You spend less time worrying about performance micro-optimizations, and more time building your app.

View the Documentation

Worker Thread Rendering is a feature of the worker-render project.

Deploy to a CDN

DoneJS makes it simple to deploy your static assets to a CDN (content delivery network).

CDNs are distributed networks of servers that serve static assets (CSS, JS, and image files). You push your files to one service, and the CDN takes care of pushing and updating your assets on servers across the country and globe. As your app scales, CDNs keep up with the demand and support your users whether they are in New York or Melbourne.

User requests across the globe without a CDN: requests take longer to fulfill the further the user is from your servers.


User requests across the globe with a CDN: requests are fulfilled much more quickly, because users are served content from the servers nearest to them.

How it works

It's widely known that CDNs offer the best performance for static assets, but most apps don't use them, mainly because it's annoying: annoying to automate, configure, and integrate with your build process.

DoneJS comes with integrations with S3 and Firebase (popular CDN services) that make configuring and deploying to a CDN dirt simple.

  1. You sign up for Firebase.
  2. You run donejs add firebase in your terminal. It asks a few questions, and you can accept the default answer for most of them.
  3. You run donejs deploy.

That's it. Now when you run your server in production mode, all static assets (CSS, JS, images, etc) are served from the CDN.

Even better, you can set up continuous deployment, so that TravisCI or other tools will deploy your code, including pushing out your latest static files to the CDN, automatically.

View the Guide

Usability features

DoneJS is used to make beautiful, real-time user interfaces that can be exported to run on every platform.

iOS, Android, and Desktop Builds

Write your application once, then run it natively on every device and operating system. You can make iOS, Android, and desktop builds of your DoneJS application with no extra effort.

Our DoneJS Chat App running as a macOS desktop app and inside an iOS emulator.

How it works

For iOS and Android builds, DoneJS integrates with Apache Cordova to generate a mobile app that is ready to be uploaded to Apple's App Store or Google Play.

For native desktop applications, DoneJS integrates with Electron or NW.js to create a native macOS, Windows, or Linux application.

Adding this integration is as simple as running

donejs add cordova
donejs add nw
donejs add electron
donejs build

With these simple integrations, you can expand your potential audience without having to build separate applications.

View the Documentation View the Guide

Cordova, Electron, and NW.js integration are features of the steal-electron, steal-cordova, and steal-nw projects.

Supports All Browsers, Even IE9+

DoneJS applications support Internet Explorer 9 with minimal additional configuration. You can even write applications using most ES6 features that run on IE9+, using the built-in Babel integration.

Many people won't care about this because old IE is on its way out, which is a very good thing!

But it's not quite dead yet. For many mainstream websites, banks, and e-commerce applications, IE9 continues to hang around in the browser stats.

And while other frameworks, like Angular and Ember, have dropped support for older versions of IE, DoneJS makes it easy to write one app that runs everywhere.

View the Guide

Real Time Connected

DoneJS is designed to add real-time behavior to applications using any backend technology stack.

Socket.io provides the basics to add real-time capabilities to any JavaScript application, but the challenge of integrating real-time updates into your code remains.

When new data arrives, how do you know what data structures to add it to? And where to re-render? Code must be written to send socket.io data across your application, but that code becomes aware of too much, and therefore is brittle and hard to maintain.

DoneJS makes weaving Socket.io backends into your UI simple and automatic.

How it works

DoneJS' model layer uses set logic to maintain lists of data represented by JSON properties, like a list of todos with {'ownerId': 2}. These lists are rendered to the UI via data bound templates.

When server-side updates are sent to the client, items are automatically removed or added to any lists they belong to. They also automatically show up in the UI because of the data bindings.

All of this happens with just a few lines of code.

const socket = io('https://chat.donejs.com');
socket.on('messages created',
  message => messageConnection.createInstance(message));
socket.on('messages updated',
  message => messageConnection.updateInstance(message));
socket.on('messages removed',
  message => messageConnection.destroyInstance(message));

Follow the guide to see an example in action. View the can-connect real-time documentation here.

View the Documentation View the Guide

Real time connections is a feature of the can-connect project.

Pretty URLs with Pushstate

DoneJS applications use pushstate to provide navigable, bookmarkable pages that support the back and refresh buttons, while still keeping the user on a single page.

The use of pushstate allows your apps to have "Pretty URLs" like myapp.com/user/1234 instead of uglier hash-based URLs like myapp.com#page=user&userId=1234 or myapp.com/#!user/1234.

Wiring up these pretty URLs in your code is simple and intuitive.

How it works

Routing works a bit differently in DoneJS than in other libraries, where you might declare routes and map those to controller-like actions.

DoneJS application routes map URL patterns, like /user/1, to properties in the application state, like {'userId': 1}. In other words, routes are just a representation of the application state.
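
A route mapping like that is registered in a line or two (a sketch using can-route; the pattern and defaults are illustrative):

import route from "can-route";

// "/user/1" <-> { page: "user", slug: "1" }
route.register("{page}/{slug}", { page: "home", slug: null });

route.start(); // two-way bind the URL to the application state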

This architecture simplifies routes so that they can be managed entirely in simple data bound templates, like the following example:

{{#switch(page)}}
  {{#case("home")}}
      <myapp-home></myapp-home>
  {{/case}}
  {{#case("users")}}
    {{#if(slug)}}
      <myapp-user-detail userId:bind="slug"></myapp-user-detail>
    {{else}}
      <myapp-users></myapp-users>
    {{/if}}
  {{/case}}
{{/switch}}

View the Guide

Pretty URLs and routing are features of the CanJS project.

Maintainability features

DoneJS helps developers get things done quickly with an eye toward maintenance.

Comprehensive Testing

Nothing increases the maintainability of an application more than good automated testing. DoneJS includes a comprehensive test layer that makes writing, running, and maintaining tests intuitive and easy.

DoneJS provides tools for the entire testing lifecycle.

How it works

Testing JavaScript apps is a complex process unto itself. To do it right, you need many tools that work together seamlessly. DoneJS provides everything you need - the whole stack.

Generators

The DoneJS app generator command donejs add app creates a working project-level test HTML and JS file. Component generators, like donejs add component cart, create a test script and an individual test page for each component.

Unit tests

Unit tests are used to test the interface for modules like models and view models. You can choose BDD-style unit tests with Jasmine or Mocha, or a more traditional TDD assertion style with QUnit.

Functional tests

Functional tests are used to test UI components by simulating user behavior. The syntax for writing them is jQuery-like and chainable: tests simulate user actions and wait asynchronously for page elements to change.

test('destroying todos', function() {
  F('#new-todo').type('Sweet. [enter]');

  F('.todo label:contains("Sweet.")').visible('basic assert');
  F('.destroy').click();

  F('.todo label:contains("Sweet.")').missing('destroyed todo');
});
Event simulation accuracy

User action methods, like click, type, and drag, simulate exactly the sequence of events a browser generates when a user performs that action. For example, this:

F( ".menu" ).click();

is not just a click event. It triggers a mousedown, then blur, then focus, then mouseup, then click. The result is more accurate tests that catch bugs early.

Even further, there are differences between how IE and Safari handle a click. DoneJS tests take browser differences into account when running functional tests.

Running tests from the command line

DoneJS comes with a command line test runner, browser launcher, and reporting tool that integrates with any continuous integration environment.

No setup is required; running a DoneJS project's tests is as simple as running:

donejs test

You can launch your unit and functional tests from the CLI, either in headless browser mode or in multiple real browsers. You can even launch BrowserStack virtual machines to test against any version of Android, Windows, etc.

The reporting tool gives detailed information about coverage statistics, and lets you choose from many different output formats, including XML or JSON files.

Mocking server APIs

Automated frontend testing is most useful when it has no external dependencies on API servers or specific sets of data. Thus a good mock layer is critical to writing resilient tests.

DoneJS apps use fixtures to emulate REST APIs. A default set of fixtures is created by generators when a new model is created. Fixtures are very flexible, and can be used to simulate error states and slow performing APIs.

import fixture from 'can-fixture';

const store = fixture.store([
  { name: 'Calisota', short: 'CA' },
  { name: 'New Troy', short: 'NT'}
],{});

fixture('/api/states/{short}', store);

export default store;
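
Fixtures can also simulate failures and latency (a sketch assuming can-fixture's response callback and global delay setting):

import fixture from 'can-fixture';

// Make every fixture response take 2 seconds, simulating a slow API.
fixture.delay = 2000;

// Simulate an error state with a response handler.
fixture('POST /api/states', function(request, response) {
  response(401, { message: 'Unauthorized' });
});
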
Simple authoring

Several DoneJS features converge to make authoring tests extremely simple.

Because of ES6 module support, everything in a DoneJS app is a module, so a test can simply import the modules it needs - such as fixtures and the module under test:

import restaurantStore from 'place-my-order/models/fixtures/restaurant';
import { ViewModel } from './list';

This means the test is small, isolated, and simple. Tests themselves are modules too, so they can be collected easily into sets of tests.

Because of the modlet pattern, each component contains its own working test script and test file, which can be worked on in isolation.

Because of hot module swapping, you can write, debug, and run tests without constantly reloading your page.

Other frameworks require a build step before tests can be run. These builds concatenate dependencies and depend on the specific order of tests running, which is a brittle and inefficient workflow.

Because DoneJS uses a client side loader that makes it simple to start a new page that loads its own dependencies, there is no build script needed to compile and run tests.

You just run the generator, load your modules, write your test, and run it - from the browser or CLI.

You spend less time messing with test infrastructure, and more time building your app.

More information

The DoneJS testing layer involves many pieces, so if you want to learn more:

  • follow along in the Unit testing view model and fixtures section of the guide
  • see how to run tests and set up CI automation in the CI section of the guide
  • read about FuncUnit, the functional testing and asynchronous user action simulating library
  • read about syn - the synthetic event library
  • read about the Testee.js browser launcher, test runner, and reporting tool
  • read the can-fixture docs

Documentation

Documentation is critical for maintainability of any complex application. When your team adds developers, docs ensure minimal ramp up time and knowledge transfer.

Yet most teams either don't write docs, or they'll do it "later" - a utopian future period that is always just out of reach. Why? Because it's extra work to set up a tool, configure it, and create and maintain separate documentation files.

DoneJS comes with a documentation tool built in, and it generates multi-versioned documentation from inline code comments. It eliminates the barrier to producing documentation, since all you have to do is comment your code (which most people already do) and run donejs document.

How it works

You write comments above the module, method, or object that you want to document:

/**
 * @module {function} utils/add
 * @parent utils
 *
 * The module's description is the first paragraph.
 *
 * The body of the module's documentation.
 *
 * @param {Number} first This param's description.
 * @param {Number} second This param's description.
 * @return {Number} This return value's description.
 */
export default function(){ ... };

Then run donejs document. A browsable documentation website will be generated.

A documentation website

DoneJS applications use DocumentJS to produce multi-versioned documentation. It lets you:

  • Write docs inline or in markdown files.
  • Specify your code's behavior precisely with JSDoc and Google Closure Compiler annotations - a well-known documentation syntax.
  • Customize your site's theme and layout.
  • Generate multi-versioned documentation.
  • Document CSS alongside JavaScript. You can even make a live style guide.

You can keep it simple like the example above, or you can customize your docs with many powerful features. In fact, this entire site and the CanJS site are generated using DocumentJS.

You spend less time messing with documentation generators, and more time building your app.

View the Documentation View the Guide

DoneJS Documentation is a feature of DocumentJS

Continuous Integration & Deployment

Continuous Integration (CI) and Continuous Deployment (CD) are must-have tools for any modern development team.

CI is a practice whereby all active development (e.g., every pull request) is checked against automated tests and builds, allowing problems to be detected early, before the code is merged into the release branch.

Example of a GitHub pull request with Travis CI integrated: it warns users before merging if their changes will break builds or fail tests.

CD means that any release or merges to your release branch will trigger tests, builds, and deployment.

Paired together, CI and CD enable automatic, frequent releases. CD isn't possible without CI. Good automated testing is a must to provide the confidence to release without introducing bugs.

DoneJS provides support for simple integration into popular CI and CD tools, like TravisCI and Jenkins.

How it works

Setting up continuous integration and deployment involves several steps:

  1. Writing tests
  2. Setting up a test harness that runs tests from the command line
  3. Creating simple scripts for running a build, test, and deploy
  4. Integrating with a service that runs the scripts at the proper times

Steps 1, 2, and 3 are the hard parts. Step 4 is simple. DoneJS helps in two main ways: proper test support and simple CLI commands.

Proper test support

DoneJS comes with comprehensive support for testing. The Testing section contains much more detail about testing support.

Generators create working test scripts right off the bat, and the plumbing for test automation is built into each project. Each modlet contains a skeleton for unit tests. All that is left for the developer to do is write tests.

Simple CLI commands

Another hurdle is creating automated build, test, and deployment scripts. Every DoneJS app comes with build, test, and deployment one-liners: donejs build, donejs test, and donejs deploy.

Tool integration

Once the tests are written and the scripts are automated, integrating with the tools that automatically run these scripts is quite simple. For instance, setting up Travis CI involves signing up and adding a .travis.yml file to the project:

language: node_js
node_js: node
script: npm start & npm test
before_install:
  - "export DISPLAY=:99.0"
  - "sh -e /etc/init.d/xvfb start"

View the CI Guide View the CD Guide

Modlets

The secret to building large apps is to never build large apps. Break up your application into small pieces. Then, assemble.

DoneJS encourages the use of the modlet file organization pattern. Modlets are small, decoupled, reusable, testable mini applications.

How it works

Large apps have a lot of files. There are two ways to organize them: by type or by module.

DoneJS Modlet Organization Diagram

Organization by module - or modlets - makes large applications easier to maintain by encouraging good architecture patterns. The benefits include:

  • Each modlet contains its own demo page and its own test page. Getting a demo page running forces separation of concerns and isolated modules - hallmarks of good design. A standalone demo and test page makes it easy to work on pieces of your application in isolation.
  • Developers are more likely to update tests and documentation if they are sitting right next to the module they are editing. The test is not hidden in a tests folder that is more easily ignored.
  • You can develop the application without having to load the entire application and all of its tests on every change.

An example modlet from the in-depth guide is the order/new component. It has its own demo page and test page.

DoneJS generators create modlets to get you started quickly. To learn more about the modlet pattern, read this blog post.

View the Video View the Guide

Modlets are a feature of DoneJS generators.

npm Packages

DoneJS makes it easy to share and consume modules via package managers like npm and Bower.

You can import modules from any package manager in any format - CommonJS, AMD, or ES6 - without any configuration. And you can convert modules to any other format.

The goal of these features is to transform project workflows, making it easier to share and reuse ideas and modules of functionality across applications, with less hassle.

How it works

DoneJS apps use StealJS to load modules and install packages. This video introduces npm import and export in StealJS:

Zero config package installation

Unlike Browserify or Webpack, StealJS is a client side loader, so you don't have to run a build to load pages.

Installing a package in a DoneJS app via npm or bower involves no configuration. Install your package from the command line:

npm install jquery --save

Then immediately consume that package (and its dependencies) in your app:

import $ from "jquery";

Using RequireJS or other client side loaders, you'd have to add paths and other information to your configuration file before being able to use your package. In DoneJS, this step is bypassed because scripts add the configuration to your package.json file as the package is installed.

You can import that package in any format: CommonJS, AMD, or ES6 module format.

Convert to any format

DoneJS supports converting a module to any other format: CommonJS, AMD, or ES6 module format, or script and link tags.

The advantage is that you can publish your module to a wider audience of users. Anyone writing JavaScript can use your module, regardless of which script loader they are using (or if they aren't using a script loader).

Just create an export script that points to the output formats you want, along with some options:

var stealTools = require("steal-tools");
stealTools.export({
  system: {
    config: __dirname + "/package.json!npm"
  },
  outputs: {
    amd: {
      format: "amd",
      graphs: true,
      dest: __dirname + "/dist/amd"
    }
  }
});

and run it from your command line:

node myexport.js

Modular workflow

In combination with other DoneJS features, npm module import and export make it possible for teams to design and share components easily.

Generators make it easy to bootstrap new modules of functionality quickly, and the modlet pattern makes it easy to organize small, self-contained modules. It's even easy to create tests and documentation for each module.

DoneJS enables a modular workflow, where pieces of small, reusable functionality can be easily created, shared, and consumed.

  1. Use generators to create a modlet
  2. Develop rich functionality
  3. Write tests and docs
  4. Export and publish it - internally or externally
  5. Consume it across applications

Similar to the way that the microservices architecture encourages reuse of APIs across applications, the modular workflow encourages reuse of self-contained modules of JavaScript across applications.

Imagine an organization where every app is broken into many reusable pieces, each of which is independently tested, developed, and shared. Over time, developers would be able to quickly spin up new applications, reusing previous functionality. DoneJS makes this a real possibility.

View the Documentation View the Guide

npm package support is a feature of StealJS

ES6 Modules

DoneJS supports the compact and powerful ES6 module syntax, even for browsers that don't support it yet. Besides future proofing your application, writing ES6 modules makes it easier to write modular, maintainable code.

import { add } from "math";

export function subtract(a, b) {
  return a - b;
}

How it works

DoneJS applications are actually able to import or export any module type: ES6, AMD and CommonJS. This means you can slowly phase in ES6, while still using your old code. You can also use any of the many exciting ES6 language features.

A compiler is used to convert ES6 syntax to ES5 in browsers that don't yet support ES6. During development, the compiler runs in the browser, so changes are happening live without a build step. During the build, your code is compiled to ES5, so your production code will run natively in every browser. You can even run your ES6 application in IE9+!

View the Documentation View the Guide

ES6 module support is a feature of the stealjs/transpile project.

Custom HTML Elements

One of the most important concepts in DoneJS is splitting up your application functionality into independent, isolated, reusable custom HTML elements.

The major advantages of building applications based on custom HTML elements are:

  1. Ease of page composition - Designers can do it! Non-developers can express complex behavior with little to no JavaScript required. All you need to build a new page or feature is HTML.
  2. Forced modularity - Because custom HTML elements must be designed as small, isolated components, they are easier to test, debug, and understand.
  3. Reuse - Custom elements are designed to be reusable across pages and applications.

Consider the following example:

<order-model get-list="{ period='previous_week' }" value:to="*previousWeek" />
<order-model get-list="{ period='current_week' }" value:to="*currentWeek" />

<bit-c3>
  <bit-c3-data>
    <bit-c3-data-column key="Last Week" value:from="*previousWeek.totals" />
    <bit-c3-data-column key="This Week" value:from="*currentWeek.totals" />
  </bit-c3-data>
</bit-c3>

This code demonstrates:

  1. An element that can load data
  2. Composable widget elements (a graph with a line-series)

If our designer wanted to add another period, all they would need to do is add another <order-model> and <bit-c3-data-column> element.

Here’s a working version of the same example in a JS Bin:

Custom HTML Elements on jsbin.com

Composing entire applications from HTML building blocks inherits HTML's natural advantages, allowing powerful and easy expression of dynamic behavior.

How it works

First, it's important to understand the background of custom elements and their advantages. Then, we'll discuss the details of creating powerful custom elements specifically in DoneJS, and why they're special.

Benefits of custom elements

Before custom HTML elements existed, to add a datepicker to your page, you would:

  1. Load a datepicker script
  2. Add a placeholder HTML element
<div class='datepicker' />
  3. Add JavaScript code to instantiate your datepicker
$('.datepicker').datepicker()
  4. Gather your stone tipped spears and forage for small animals to feed your family for the night.

With custom HTML elements, to add the same datepicker, you would:

  1. Load a datepicker script
  2. Add the datepicker to your HTML or template:
<datepicker value:bind="date"/>

That might seem like a subtle difference, but it is actually a major step forward. The custom HTML element syntax allows for instantiation, configuration, and location, all happening at the same time.

Custom HTML elements are another name for Web Components, a browser spec that has yet to be implemented across browsers.

Benefits of DoneJS custom elements

DoneJS uses CanJS' can-component to provide a modern take on web components.

Components in DoneJS have three basic building blocks:

  • a template
  • a viewModel object
  • event handlers

There are several unique benefits to DoneJS custom elements:

Defining a custom element

One way to define a component is using a web component style declaration, using a single file with a .component extension:

<can-component tag="hello-world">
    <style type="less">
        i {
            color: red;
        }
    </style>
    <view>
        {{#if visible}}<b>{{message}}</b>{{else}}<i>Click me</i>{{/if}}
    </view>
    <script type="view-model">
        import DefineMap from "can-define/map/map";

        export default DefineMap.extend({
            visible: { default: true },
            message: { default: "Hello There!" }
        });
    </script>
    <script type="events">
        export default {
            click: function(){
                this.viewModel.visible = !this.viewModel.visible;
            }
        };
    </script>
</can-component>

This simple form of custom elements is great for quick, small widgets, since everything is contained in one place.

Another way to organize a custom element is a modlet style file structure: a folder with the element broken into several independent pieces. In this pattern, the custom element's ViewModel, styles, template, event handlers, demo page, tests, and test page are all located in separate files. This type of custom element is well suited for export and reuse.

DoneJS Generators will create both of these types of custom elements so you can get started quickly.

Data elements + visual elements = expressive templates

The beauty and power of custom HTML elements are most apparent when visual widgets (like graphing) are combined with elements that express data.

Back to our original example:

<order-model get-list="{ period='previous_week' }" value:to="*previousWeek" />
<order-model get-list="{ period='current_week' }" value:to="*currentWeek" />

<bit-graph title="Week over week">
  <bit-series data:from="*previousWeek.totals" />
  <bit-series data:from="*currentWeek.totals" color="Blue"/>
</bit-graph>

This template combines a request for data with an element that expresses it. It's immediately obvious how you would add or remove features from this, allowing for quick changes and easy prototyping. Without custom elements, the same changes would require more difficult code changes and wiring those changes up with widget elements that display the data.

Data custom elements are part of DoneJS via can-connect's can-tag feature.

Custom element libraries

Custom elements are designed to be easily shareable across your organization. DoneJS provides support for simple npm import and export and creating documentation for elements. Together with custom element support, these features make it easier than ever to create reusable bits of functionality and share them.

Some open source examples of DoneJS custom elements:

bit-c3 bit-tabs bit-autocomplete

Check out their source for good examples of shareable, documented, and tested custom elements.

In-template dependency declarations

can-import is a powerful feature that allows templates to be entirely self-sufficient. You can load custom elements, helpers, and other modules straight from a template file like:

<can-import from="components/my_tabs"/>
<can-import from="helpers/prettyDate"/>
<my-tabs>
  <my-panel title="{{prettyDate start}}">...</my-panel>
  <my-panel title="{{prettyDate end}}">...</my-panel>
</my-tabs>

The <can-import> element also plays a key role in Progressive Loading. Simply wrapping a section in a non-self-closing <can-import> signals to the build that the enclosed section's dependencies should be progressively loaded.

{{#eq(location, 'home')}}
<can-import from="components/home">
  {{#if(isResolved)}}
  <my-home/>
  {{/if}}
</can-import>
{{/eq}}
{{#eq(location, 'away')}}
<can-import from="components/chat">
  {{#if(isResolved)}}
  <my-chat/>
  {{/if}}
</can-import>
{{/eq}}

View the Documentation View the Guide

Custom HTML elements are a feature of CanJS

MVVM Architecture

DoneJS applications employ a Model-View-ViewModel architecture pattern, provided by CanJS.

MVVM Architecture Diagram

The introduction of a strong ViewModel has some key advantages for maintaining large applications:

  • Decouples the presentation from its business logic - A ViewModel is essentially an object and methods representing the state of a View. This separation of concerns enables simple, dumb HTML-based Views containing minimal logic, while the ViewModel manages the complexities of application logic.
  • Enables designer/developer cooperation - Because the view is stripped of code and application logic, designers can safely and comfortably change the View without fear of breaking things.
  • Enables easier testing - ViewModels can be unit tested easily. Because they represent the view's state without any knowledge of the DOM, they provide a simple interface for testing.

How it works

The following video introduces MVVM in DoneJS, focusing on the strength of the ViewModel with an example.

DoneJS has a uniquely strong ViewModel layer. We'll discuss how it works and compare it to other frameworks.

MVVM overview

Models in DoneJS are responsible for loading data from the server. They can be reused across ViewModels. They often perform data validation and sanitization logic. Their main function is to represent data sent back from a server. Models use an intelligent set logic that enables real-time integration and caching techniques.

Views in DoneJS are templates: specifically, templates that use Handlebars syntax, extended with data bindings and rewritten for better performance. Handlebars templates are designed to be logic-less.

ViewModels will be covered in detail below.

Independent ViewModels

The first reason DoneJS ViewModels are unique is their independence. ViewModels and Views are completely decoupled, and a ViewModel can be developed in complete isolation from a template.

For example, here's a typical ViewModel, which is often defined in its own separate file like viewmodel.js and exported as its own module:

import DefineMap from "can-define/map/map";

export const ViewModel = DefineMap.extend({
  get fullName() {
    return this.first + " " + this.last;
  }
});

The template (view) lives in its own file, so a designer could easily modify it without touching any JavaScript. This template renders the ViewModel property from above:

<div>{{fullName}}</div>

A custom HTML element, also known as a component, would be used to tie these layers together:

import Component from 'can-component';
import { ViewModel } from './viewmodel';
import view from './view.stache';

Component.extend({
  tag: 'my-component',
  ViewModel,
  view
});

The ViewModel is defined as its own module and exported as an ES6 module, so it can be imported into a unit test, instantiated, and tested in isolation from the DOM:

import { ViewModel } from './viewmodel';

QUnit.test('fullName works', function() {
  var vm = new ViewModel();
  vm.first = 'John';
  vm.last = 'Doe';
  QUnit.equal(vm.fullName, 'John Doe');
});

In other frameworks, ViewModels don't enjoy this level of independence. Every React class has a render function, which is essentially a template, so the View, ViewModel, and component definition are typically part of the same module. Every Angular directive is a ViewModel. In DoneJS, separating the ViewModel, template, and custom element is encouraged, making each module more decoupled and easier to unit test.

Powerful observable data layer

A powerful observable data layer binds the layers together with very minimal code.

DoneJS supports the following features:

  1. Direct observable objects - changes to a property in an object or array immediately and synchronously notify any event listeners.

  2. Computed properties - ViewModels can define properties that depend on other properties, and they'll automatically recompute only when their dependent properties change.

  3. Data bound templates - templates bind to property changes and update the DOM as needed.

In the simple ViewModel example above, fullName's value depends on first and last. If something in the application changes first, fullName will recompute.

export const ViewModel = DefineMap.extend({
  get fullName() {
    return this.first + " " + this.last;
  }
});

fullName is data bound to the view that renders it:

<div>{{fullName}}</div>

If first is changed:

viewModel.first = 'Jane';

fullName recomputes, then the DOM automatically changes to reflect the new value.

The interplay of these layers provides amazing power to developers. ViewModels express complex relationships between data, without regard to its display. Views express properties from the ViewModel, without regard to how the properties are computed. The app then comes alive with rich functionality.

Without automatic ties connecting these layers, achieving the same fullName functionality would require more code explicitly performing these steps. There would need to be communication between layers, removing the isolation achieved above. Any change to first would need to notify ViewModel's fullName of a change. Any change to fullName would need to tell the view to re-render itself. These dependencies grow and quickly lead to unmaintainable code.

In Angular, there are no direct observables. It uses dirty checking with regular JavaScript objects, which means at the end of the current $digest cycle, it will run an algorithm that determines what data has changed. This has performance drawbacks, as well as making it harder to write simple unit tests.

In React, there is no observable data layer. You could define a fullName like we showed above, but it would be recomputed every time render is called, whether or not it has changed. Though it's possible to isolate and unit test its ViewModel, it's not quite set up to make this easy.


The MVVM architecture in DoneJS is provided by CanJS.

Hot Module Swapping

Getting and staying in flow is critical while writing complex apps. In DoneJS, whenever you change JavaScript, CSS, or a template file, the change is automatically reflected in your browser, without a browser refresh.

How it works

Live reload servers generally watch for file changes and force your browser window to refresh. DoneJS doesn't refresh the page; it re-imports modules that are marked as dirty, in real time.

The correct terminology is actually hot swapping, not live reload. Regardless of what it's called, the result is a blazing fast development experience.

There is no configuration needed to enable this feature. Just start the dev server and begin:

donejs develop

You spend less time waiting for refreshes and builds, and more time building your app.

View the Documentation

Live reload is a feature of StealJS.

Generators

DoneJS generators help you kickstart new projects and components. They'll save you time, eliminating boilerplate by scaffolding a working project, component, or module.

Generator templates set up many of the best practices and features discussed in the rest of this page, without you even realizing it.

How it works

The DoneJS generator uses Yeoman to bootstrap your application, component, or model.

There are four generators by default (and you can easily create your own).

Project generator

From the command line, run:

donejs add app

You'll be prompted for a project name, source folder, and other setup information. DoneJS' project dependencies will be installed, like StealJS and CanJS. In the folder that was created, you'll see:

├── .yo-rc.json
├── build.js
├── development.html
├── package.json
├── production.html
├── readme.md
├── test.html
├── src/
|   ├── app.js
|   ├── index.stache
|   ├── models/
|   |   ├── fixtures
|   |   |   ├── fixtures.js
|   |   ├── test.js
|   ├── styles.less
|   ├── test.js
├── node_modules/

You're now a command away from running application wide tests, generating documentation, and running a build. Start your server with donejs develop, open your browser, and you'll see a functioning, server-side rendered hello world page.

Modlet component generator

To create a component organized with the modlet file organization pattern:

donejs add component <folder-path> <component-name>

It will create the following files:

restaurant/
├── list/
|   ├── list.html
|   ├── list.js
|   ├── list.less
|   ├── list.md
|   ├── list.stache
|   ├── list_test.js
|   ├── test.html

This folder contains everything a properly maintained component needs: a working demo page, a basic test, and documentation placeholder markdown file.

Simple component generator

For simple, standalone components:

donejs add component <file-name>.component <component-name>

Which will generate a working component in a single file.

Model generator

To create a new model:

donejs add supermodel <model-name>

This will create:

  • a working model in the application's models folder
  • a working fixture file for that model
  • a working test, and add the test as a dependency for the application's model test

You spend less time setting up your app, and more time building it.

View the Documentation View the Guide

Generators are provided by the generator-donejs project, with additional support via the donejs-cli project.
