Saturday, June 30, 2012

Datomic: An Initial Comparative Analysis


/*---[ Day of Datomic: Initial Thoughts ]---*/

Yesterday I attended a "Day of Datomic" training course, taught by Stuart Halloway. It was well taught and had quite a bit of lively Q&A and discussion. If you have the opportunity to take the Day-of-Datomic course I highly recommend it.

At the bar at the end of class, one of my fellow students and I were discussing Rich Hickey and he said "Rich has clarity". Rich has thought long, deeply and critically enough to see through to the essence of key issues in modern computer science, bypassing things so ingrained that they amount to familiarity blindness for the rest of us. Datomic is the latest brain-child of Rich Hickey and was co-created by him, Stu and others at ThinkRelevance, here in Durham, where I live.

To be clear, I have not yet attained "clarity" on Datomic. The training course was an excellent deep dive that lays the groundwork for you to climb the hill of understanding, but that hill still lies ahead of me. As Fogus said, this is something very new, at least to most of us who are used to thinking in terms of relational algebra, SQL, and structured tables. Even for those of us now comfortable with document databases (Mongo and Couch) and key-value stores, this is still different in many important ways.

I won't have time to dive deeper into Datomic for a while (too many other things on my to-do/to-research list), but I wanted to capture some key notes and thoughts that I can come back to later and may help others when starting out with Datomic.

I am not going to try to give an overview of Datomic. There are lots of good resources on the web now, starting with the Datomic web site.

I reserve the right to retract any thoughts I lay out here, as I have not yet done my hammock time on Datomic.


/*---[ Compare it to what you know ]---*/

When you get something new, you want to start by comparing it to what you know. I think that's useful for finding the boundaries of what it's not and tuning the radio dial to find the right way to think about it. There is enough new here that comparisons alone are not sufficient to get a handle on Datomic, but it's one helpful place to start.

Here is a partial list of things that could be useful to compare Datomic to:

  1. Git
  2. Document dbs (MongoDB and CouchDB)
  3. Graph dbs (Neo4j)
  4. RDF Triple Stores
  5. Event Sourcing
  6. VoltDB (and other "NewSQL" databases)


/*---[ Datomic v. Git ]---*/

I've seen the "Datomic is like Git" analogy come up a couple of times now - first in the Relevance podcast with Stu, once while describing Datomic to a work colleague and again yesterday in the class. This seems to resonate with many so use it as a starting point if it helps you, but the analogy falls short in a number of areas.

In fact, the analogy is useful really only in one way - for the most part we aren't used to thinking in terms of immutability. Stu emphasized this in the course - immutability changes everything. Rich Hickey, in his Clojure West talk on Datomic, said "you cannot correctly represent change without immutability ... and that needs to be recognized in our architectures".

Git has given us the idea of immutable trees (DAGs, actually) of data in our history, each wrapped in a SHA1 identifier that guarantees the tree has not changed since it was committed. These trees of history could have been created through sequential commits or a variety of branches and merges. Regardless, I can always go back to that point in time.

That idea does resonate with Datomic's design, but after that the analogy falls apart. Datomic doesn't have multiple branches of history - there is only one sequential trail of history. Datomic is not meant to be a source code repository. Another difference is that Datomic is intended to deal with much larger volumes of data than git. Git does not separate storage from query or transactional commits, as Datomic does, because it doesn't need to: you should never have a source code repo so big that you need to put it on a distributed storage service like Amazon's DynamoDB.


/*---[ Datomic v. Document data stores ]---*/

"Schema" in Datomic is the thing I struggled with the most in the class. I think it will be important carefully and systematically spell out what is (and isn't) a schema in Datomic.

In my view, the Datomic schema model is non-obvious to a newcomer. On the one hand I got the impression from the earlier talks I'd seen on Datomic that Datomic is just a "database of facts". These facts, called "datoms", do have a structure: Entity, Attribute, Value and Time/Transaction (E/A/V/T for short). But "Datomic is a database of facts" sounds superficially like a set of records that have no other structure imposed on them.

On the other hand, my naive assumption coming into the training was that Datomic is actually quite close to the Mongo/Couch model. When written out, Datomic records look like constrained JSON - just vectors of E/A/V/T entries - and some of those values can be references to other JSON-looking E/A/V/T facts, which seems like a way to link documents in a document datastore.
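To illustrate what I mean, here is a rough sketch of a handful of datoms written out as E/A/V/T tuples. The entity/transaction ids and the :person/* attributes are made up for illustration:

;; [entity  attribute       value          transaction]
[[100  :person/name    "Jane"         1000]
 [100  :person/email   "jane@x.com"   1000]
 [100  :person/friend  200            1001]   ; a ref to another entity
 [200  :person/name    "Raj"          1001]]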

It turns out that neither of those presuppositions is right. Datoms are not JSON documents. Stu made the comment that of all the databases available out there, the Mongo/document-store model is probably the most different from the Datomic model (see Figure 1 below). That was actually quite surprising to me.

I have to go off and think about the differences of these models some more. It will take a bit of tuning of the radio dial back and forth to understand what the Datomic schema is.


/*---[ Datomic v. Graph databases (Neo4j) ]---*/

One area not covered much in the day-of-datomic training was a comparison of Datomic to graph databases (Neo4j in particular).

A significant part of the object-relational impedance mismatch is the translation of relational rectangles to object graphs. When people come to a graph database they get really excited about how this impedance falls away. Walking hierarchies and graphs seems beautifully natural and simple in a graph database, compared to the mind-bending experience of doing that with SQL (or extensions to SQL).

So can Datomic function as a graph database? I don't know. Maybe. This type of db was not on Stu's slides (see figure 1 below). When setting up attributes on an entity, you have to specify the type of each attribute. One type of attribute is a ref - a reference to another entity. By setting up those refs in the schema of attributes, is that basically the same as setting up a Relationship between Nodes in a graph db? Or does Neo4j do things for which Datomic is either not suited or not as performant?
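To make my question concrete, here is a minimal sketch of what a ref attribute looks like in a Datomic schema. The :person/friend attribute is my own made-up example, and the installation boilerplate follows the style of the Datomic sample schema files:

;; A cardinality-many ref from one person entity to others --
;; the closest thing Datomic has to a graph database's Relationship/Edge.
[{:db/id                 #db/id[:db.part/db]
  :db/ident              :person/friend
  :db/valueType          :db.type/ref
  :db/cardinality        :db.cardinality/many
  :db/doc                "People this person is connected to"
  :db.install/_attribute :db.part/db}]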


[Update: 02-July-2012]: Stu Halloway sent me an email that gave some feedback on this question. First, he pointed me to an article by Davy Suvee in which he gives a nice set of basic examples of how Datomic keeps a graph of relations that can be queried using Datalog.

Stu pointed out that one difference between Datomic and a graph database like Neo4j is that the latter "reifies edges". In graph theory you have two basic concepts: the Node (Vertex) and the Relationship (Edge) between two Nodes. Both are first class (reified) entities of the graph model - they can have identities and attributes of their own.

Datomic does not make the Relationship a thing unto itself - it is just the "ref" between two entities (nodes). In Stu's words: "We believe many of the common business uses for reified edges are better handled by reified transactions". More food for thought! :-)

That being said, you can wrap Datomic in a full-blown graph API, and that is just what Davy Suvee has done. He implemented Datomic as a backend to the Blueprints Graph API and data model, which is pretty good evidence that Datomic can function as a graph database for at least some important subset of use cases.



/*---[ Datomic v. RDF Triple Stores ]---*/

Interestingly, the part of the NoSQL world that seems to get the least attention (at least in the circles I mingle in, including sifting through Hacker News with some regularity) is the one whose data model comes closest to Datomic's - RDF Triple Stores.

Rich, in his Clojure West talk, says that RDF triple stores almost got it right. Triple stores use subject-predicate-object to represent facts, which maps to Datomic's entity-attribute-value concept. The missing piece is time, which Datomic models as, or wraps into, transactions. Transactions in Datomic are first-class creatures: they encapsulate the timestamp and the order of history, and can have additional attributes added to them (such as which user id is associated with the transaction).
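As a small sketch of that last point, here is roughly how you can assert an extra fact about the transaction entity itself while transacting other data. The :audit/user and :person/name attributes are hypothetical and would need to be installed in the schema first; conn is assumed to be an open Datomic connection:

(require '[datomic.api :as d])

(d/transact conn
  [{:db/id (d/tempid :db.part/user)    ; the new "business" entity
    :person/name "Jane"}
   {:db/id (d/tempid :db.part/tx)      ; the transaction entity itself
    :audit/user "jsmith"}])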

Frankly, I don't know a lot about RDF triple stores, so I'm just going to include this slide from Stu's training session for your consideration:

Fig. 1: Comparison of Datomic to other databases using Agility as a metric. (From Stuart Halloway's training slides)


/*---[ Datomic v. Event Sourcing ]---*/

Event sourcing is the intriguing and compelling idea that changes to state should be recorded as a sequence of events. Those events are immutable and often written to an append-only log of "facts". When you do event sourcing with a relational database, you have to decide how to log your facts and then what to represent in the relational schema. Will you keep only "current state"? Will you keep all of history? Will you set up a "write-only" or "write-predominant" database where up-to-date state is kept and then read-only databases that somehow get populated from this main datastore?

There are lots of ways to set it up and thus lots of decisions to make in your architecture. Read Fowler's write-up on it if you are new to the idea. The newness of this idea is something of a risk when trying to figure out the best way to implement it.

As I understand it so far, Datomic is an implementation of an event sourcing model that also answers for you many of these ancillary questions about how to implement it, such as: where and in what format will you write the transaction log? How will you structure your query tables? Will you replicate those tables? Will you project those tables into multiple formats for different types of queries and analytics? Will transactions be first-class entities?

Datomic's answers to those questions include:

  1. There is a component of the system called the transactor, and it writes immutable transactions in a variant of "append-only", since it writes transactional history to trees. As I'll talk about in the next section, all transactions are handled in serialized (single-threaded) mode.
  2. You structure your data into E/A/V/T facts using the Datomic schema model constraints. Queries are done on the Peers (app tier) and those Peers have read-only datasets.
  3. You don't need to make multiple projections of your data. Datomic already gives you four "projections" (indexes) out of the box - one index by transaction/time and three other indexes:
    1. An EAVT index, meaning an index ordered first by entity (id), then by attribute, then by value and finally by transaction
      • This is roughly equivalent to row-level index access in a relational system
    2. An AEVT index order, which is the analog of a column store in an analytics database
    3. A VAET index order, for querying by known values first
    4. Lastly, you can also specify that Datomic keep an AVET index, which is useful for range queries on attribute values. It is not turned on by default, since it significantly slows down transactional write throughput
  4. Transactions are first-class and have their own attributes. You can query the database based on transactions/time, and you can go back to a point in history and see what the state of the database was at that time (sketched below).
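Here's a rough sketch of what that last point looks like in practice. The attribute name is hypothetical, conn is an open connection, and t is some past transaction id or instant:

(require '[datomic.api :as d])

(let [db-now (d/db conn)          ; the current value of the database
      db-old (d/as-of db-now t)]  ; the database as of a past point in time
  ;; run the same query against the historical value
  (d/q '[:find ?name
         :where [?e :person/name ?name]]
       db-old))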

I think one of the most exciting things about Datomic is that it gives you an event-sourced system with many of the design decisions already made for you.


/*---[ Datomic v. VoltDB ]---*/

I did a short write-up on VoltDB and the "new SQL" space a few months ago.

Like VoltDB, Datomic posits that we do not have to give up ACID transactions in order to improve scalability over traditional relational models. And VoltDB and Datomic share one key core premise:

all transactional writes should be serialized by using a single writer.

In VoltDB, there is one thread in one process on one server that handles all transactions. In Datomic, there is one transactor on one machine (either in its own thread or process, I'm not sure which) that handles all transactions. Unlike in most NoSQL systems such as MongoDB, transactions can involve any number of inserts/updates/deletes (VoltDB) or additions/retractions (Datomic), as sketched below.
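For instance, a single Datomic transaction can carry any mix of additions and retractions, all committed (or rejected) together by the single transactor. A sketch with made-up entity ids and attributes:

(require '[datomic.api :as d])

(d/transact conn
  [[:db/add     100 :person/email "jane@newjob.com"]   ; assert a new fact
   [:db/retract 100 :person/email "jane@x.com"]        ; retract the old one
   [:db/add     200 :person/name  "Raj Patel"]])       ; and touch another entity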

Another similarity is that both VoltDB and Datomic allow you to add embedded functions or stored procedures to the transactor that will run inside the transaction. In both cases, you add or compile the embedded transaction functions to the transactor itself. Those functions do not run in the app tier.

Michael Stonebraker, a key designer of VoltDB, based this design on the fact that traditional relational databases spend on the order of 80% of their cycles doing non-productive work: managing locks and latches, commit logs and buffer caches. By moving to a single-threaded model, the overhead of write contention goes away. VoltDB claims its transactional write throughput is on the order of 40x to 50x faster than traditional relational databases.

Where Datomic and VoltDB part ways is that VoltDB retains SQL and a fully relational model, using an in-memory database that is shared across many nodes in a cluster.

Another key difference is that VoltDB really focuses on the OLTP side, not on analytics. Datomic can target both areas.


/*---[ Datomic v. Stonebraker's "10 Rules" ]---*/

In the summer of 2011, Michael Stonebraker and Rick Cattell published an ACM article called "10 Rules for Scalable Performance in ‘Simple Operation’ Datastores". By "Simple Operation Datastore" they mean databases that are not intended to be data warehouses, particularly column store reporting databases. So traditional relational databases and most NoSQL databases, including Datomic, are "Simple Operation" by this definition.

How does Datomic fare against their 10 rules? As I said, I'm still too new to Datomic to give a definitive answer, but here's my first pass at assessing Datomic against them. I'd invite others to chime in with their own analysis of this. These axes will provide another way for us to analyze Datomic.


Rule 1: Look for Shared Nothing Scalability

A shared-nothing architecture is a distributed or clustered environment in which each node shares neither main memory nor disk. Instead, each node has its own private memory, disks and IO devices independent of other nodes, allowing it to be self-contained and possibly self-sufficient, depending on the usage conditions. Typically in a shared nothing environment the data is sharded or distributed across nodes in some fashion.

An alternative architecture is shared disk - a disk that is accessible from all cluster nodes. In a shared disk environment, write contention must be managed by the disk or database server.

So is Datomic a shared nothing architecture? One could argue that the Peers are not entirely self-sufficient, as they rely on the storage service to pull data. And the transactor is not self-sufficient as it has no storage of its own - it depends on the storage service.

However, I think Datomic is shared nothing in that there will never be write contention among nodes - there is only one (active) transactor.

I also don't quite know how the storage service is architected with Datomic. If it appears as a monolithic service to the Peers on the app tier, then if it goes down, none of the Peers can function unless they have cached all the data they need.


Rule 2: High-level languages are good and need not hurt performance.

This is a direct shot at the fact that many NoSQL databases have had to reinvent APIs for querying data, and many have given us low-level programming APIs, not DSLs or high-level languages. (To be fair, some have - Cassandra has CQL and Neo4j has the excellent Cypher language.)

Datomic fares very well on this one - it gives us Datalog, which, as Stu pointed out in the training, is mathematically sound like the relational algebra and can be transformed into it.
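So this isn't abstract, here is a quick taste of what a Datalog query looks like. It is a sketch with made-up attribute names; db is a database value obtained via (d/db conn):

(require '[datomic.api :as d])

;; Find the names of everyone who is a friend of Jane.
;; Each :where clause is an [entity attribute value] pattern, and the
;; shared ?jane and ?e variables express the joins.
(d/q '[:find ?name
       :where
       [?jane :person/name   "Jane"]
       [?e    :person/friend ?jane]
       [?e    :person/name   ?name]]
     db)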


Rule 3. Plan to carefully leverage main memory databases.

Datomic allows you to pull arbitrary amounts of data from the datastore into memory. The amount is limited by (at least) two things:

  1. how much memory you can put on the app/Peer machine

  2. how much memory you can put into a JVM without hitting a size where GC pauses interfere with the application

On the GC end, you have a couple of options to ramp up. First, Datomic has a two-tier cache on the Peer - one in main memory (on-heap) and one that is "compressed", i.e., treated as a byte array, which could be on- or off-heap, but in either case puts less pressure on the garbage collector.

Second, if you want really big JVMs with on-heap memory and no GC pauses, you can leverage Azul's JVM, though it will cost you extra.

Datomic scores reasonably well here, but it is not architected to be a sharded in-memory database. However, with a very fast network and SSDs on AWS, Datomic's performance may not suffer significantly. We await some good case studies on this.


Rule 4. High availability and automatic recovery are essential for SO scalability.

From the Stonebraker paper:

Any DBMS acquired for SO applications should have built-in high availability, supporting nonstop operation.

With an AWS deployment, you inherit any weaknesses of the AWS model (e.g., a few prominent outages recently), but overall it's a highly redundant and highly available storage service. With Datomic you can have any number of Peers on the app tier, so replicate those as needed.

You are not required to use AWS. Currently Datomic can also be deployed on relational systems, and they hope to be able to use Riak in the future.

The single point of failure would appear to be the single transactor, but Datomic will provide failover to an online backup transactor. We'll need to learn whether this failover will be provided as an automatic service or will require configuration/set-up by the user/administrator.


Rule 5. Online everything.

This overlaps rule 4 in terms of always being up, but adds to it the idea that schema changes, index changes, reprovisioning and software upgrades should all be possible without interruption of service.

I don't have hard data here from the Datomic guys, but my guess is that Datomic will pass this test with flying colors. Most indexes are already turned on by default and I suspect turning on the other one (see above) can be done without database downtime.

Schema changes ... hmm. Stu said in the training course that attributes cannot (currently) be removed from the schema, but you can add new ones. Would this require rebuilding the indexes? I doubt it, but I'd need more information on this one.

For reprovisioning and software upgrades, it is easy to envision how to add more storage and Peers, since you have redundancy built in from the start. For the transactor, reprovisioning would not mean spinning up more nodes to process transactions; it would mean getting a bigger/faster box. I suspect you could provision the new machine offline while your existing transactor keeps running, then swap to the new, improved system in real time using the failover procedure.


Rule 6: Avoid multi-node operations.

From the Stonebraker paper:

For achieving SO scalability over a cluster of servers .... Applications rarely perform operations spanning more than one server or shard.

Datomic does not shard data sets. The only operation that needs to span more than one server is when a Peer needs data that has not yet been cached on the app tier; it then makes a network lookup to pull in that data. This is certainly slower than a complete main-memory database, but Stu said that AWS lookups are very fast - on the order of 1ms.


Rule 7: Don’t try to build ACID consistency yourself.

Unlike most NoSQL datastores, Datomic has not abandoned ACID transactions. It handles transactions of arbitrary complexity in a beautiful, fast serialized model. A+ on this one.


Rule 8: Look for administrative simplicity.

From the Stonebraker paper:

One of our favorite complaints about relational DBMSs is their poor out-of-the-box behavior. Most products include many tuning knobs that allow adjustment of DBMS behavior; moreover, our experience is that a DBA skilled in a particular vendor’s product, can make it go a factor of two or more faster than one unskilled in the given product.

As such, it is a daunting task to bring in a new DBMS, especially one distributed over many nodes; it requires installation, schema construction, application design, data distribution, tuning, and monitoring. Even getting a high-performance version of TPC-C running on a new engine takes weeks, though code and schema are readily available.

I don't have a lot of data one way or the other on this for Datomic. I doubt there will be a lot of knobs to turn, however. And if you are deploying to AWS, there is a Datomic AMI, so deployment and setup of the storage side should be quite simple.


Rule 9: Pay attention to node performance.

Not every performance problem can or should be solved with linear scalability. Make sure your database can have excellent performance on individual nodes.

This one is a no-brainer: A+ for Datomic. You can run all the pieces on the same box, entirely on different boxes, or mix and match as desired to put faster hardware where each piece needs it. You can also put your high-intensity analytics queries on one box and your run-of-the-mill queries on another, and they will not interfere. Performance of the system will be entirely dependent on individual node performance (and data lookups back into your data store).


Rule 10. Open source gives you more control over your future.

Datomic is a commercial product and is not currently open source or redistributable, as I understand it.

From the Stonebraker paper:

The landscape is littered with situations where a company acquired a vendor’s product, only to face expensive upgrades in the following years, large maintenance bills for often-inferior technical support, and the inability to avoid these fees because the cost of switching to a different product would require extensive recoding. The best way to avoid “vendor malpractice” is to use an open source product.

VoltDB, as an example, is open source and seeks commercial viability by using the support model that has worked well for Red Hat. Datomic is still young (less than 6 months old), so we will see how its pricing and support model evolve.


/*---[ Coda ]---*/

Remember that Datomic is more than the union of all these comparisons, even assuming I got all my facts right. Datomic not only combines the best of many existing solutions, but brings new stuff to the table. Like anything of significant intellectual merit, it will take an investment of your time to learn it, understand it and, for us as a community of users, to figure out best practices with it.

Datomic is interesting and fascinating from an intellectual and architectural point of view, but it is also eminently practical and ready to use. I look forward to following its progress over the coming months and years.

Lastly, if it is of any use to anyone, here are my raw notes from the day-of-datomic training.

Sunday, June 24, 2012

Setting up Emacs 24 on Ubuntu to use Swank-Clojure

I've tried a couple of times now to get Clojure with Swank/Slime working in order to use the "clojure-jack-in" command to start the Swank Clojure server. My attempt a few months ago with emacs23 on Xubuntu 11.10 failed. My recent attempt with emacs24 on Fedora 17 failed (see angry tweet).

I finally had success, so I thought I'd do a brief write up on what I did.

First, I did this on my main machine: Xubuntu 11.10. Canonical still does not have an official emacs24 bundle. I decided not to install the emacs24 package from the Cassou PPA, as I've heard others have had problems with it and I didn't know if I would have to uninstall emacs 23 to use it. I toyed with using Nix, but didn't want to make things even more complicated since I've never tried Nix.

I intend to switch over to the Canonical emacs24 package once they have it, so I left my emacs 23 package intact and built emacs 24 from source and installed it into a local directory, rather than the standard global one.


/* ---[ Installing emacs 24 ]--- */

I downloaded the source from http://alpha.gnu.org/gnu/emacs/pretest/.

I installed some required packages in order to compile emacs:

$ sudo apt-get install xorg-dev
$ sudo apt-get install libjpeg-dev libpng-dev libgif-dev libtiff-dev libncurses-dev

I unpacked the source tarball, specified a local install dir, compiled and installed:

$ tar xvfz emacs-24.1-rc.tar.gz
$ cd emacs-24.1
$ ./configure --prefix=/home/midpeter444/apps/emacs24
$ make
$ make install  # no sudo required, as it installs 'locally'

Then I reset the global links to emacs:

$ ls -l /usr/bin/emacs
lrwxrwxrwx 1 root root 23 /usr/bin/emacs -> /etc/alternatives/emacs
$ ls -l /etc/alternatives/emacs
lrwxrwxrwx 1 root root 18 /etc/alternatives/emacs -> /usr/bin/emacs23-x
$ sudo rm /etc/alternatives/emacs
$ sudo ln -s /home/midpeter444/apps/emacs24/bin/emacs /etc/alternatives/emacs

Next I moved my .emacs.d (from emacs23) and made it into a symlink:

$ mv ~/.emacs.d ~/.emacs23.d 
$ mkdir ~/.emacs24.d 
$ ln -s ~/.emacs24.d ~/.emacs.d

I copied over my init.el and macros.el files into ~/.emacs24.d, started emacs, and then installed a bunch of packages via package.el. In order to use the marmalade repository (in addition to GNU's more limited package repo), I added this to my init.el:

(require 'package)
(package-initialize)
(add-to-list 'package-archives
             '("marmalade" . "http://marmalade-repo.org/packages/") t)

Then I restarted emacs and installed a bunch of packages:

> M-x package-list-packages

Here's what I chose and it installed into my ~/.emacs.d/elpa directory:

~/.emacs.d$ ls elpa/
archives                         key-chord-0.5.20080915
clojure-mode-1.11.5              markdown-mode-1.8.1
clojurescript-mode-0.5           paredit-22
clojure-test-mode-1.6.0          php-mode-1.5.0
coffee-mode-0.3.0                scala-mode-0.0.2
color-theme-6.5.5                thumb-through-0.3      
color-theme-vim-insert-mode-0.1  tidy-2.12              
feature-mode-0.4                 windsize-0.1           
find-things-fast-20111123        yaml-mode-0.0.7        
groovy-mode-20110609             yasnippet-0.6.1        
guru-mode-0.1                    yasnippet-bundle-0.6.1 
haml-mode-3.0.14                 zen-and-art-theme-1.0.1
js2-mode-20090814                zenburn-theme-1.5
json-1.2

Notice that I did NOT install slime or swank - only clojure-mode. (That may have been my problem previous times I tried this.)

I had to move a few other things over from my .emacs23.d that were not in the package.el directory.

If you are new to using package.el, this blog post helped me: http://batsov.com/articles/2012/02/19/package-management-in-emacs-the-good-the-bad-and-the-ugly/

If you are interested in my emacs setup, I have it on GitHub: https://github.com/midpeter444/emacs-setup


/* ---[ Swank Clojure and clojure-jack-in ]--- */

At this point, after a few minor tweaks to my init.el, my emacs is back in working order and looking good, all nicely upgraded to v24. Now time to tackle this Swank, Slime, clojure-jack-in mystery.

Following the instructions at the swank-clojure project, I cd'd into one of my existing leiningen projects and added [lein-swank "1.4.4"] to the :plugins section of the project.clj file, so it looks like this:

(defproject learn-congomongo "1.0.0-SNAPSHOT"
  :description "Learn the CongoMongo Clojure MongoDB driver"
  :plugins      [[lein-swank "1.4.4"]]
  :dependencies [[org.clojure/clojure "1.4.0"]
                 [congomongo "0.1.9"]])

Then the magic moment:

> M-x clojure-jack-in

After an excruciatingly long wait with bated breath, it downloaded Slime and Swank and loaded the REPL successfully. Success!

Now I just have to learn how to use it and all those fancy keystrokes it allows.


[22-Aug-2012 Update]: Phil Hagelberg tweeted that Swank Clojure is now deprecated in favor of nrepl.el.

I never did get Clojure Swank to work right on my system, so I dropped back to doing M-x inferior-lisp, which works pretty well with Clojure 1.3, 1.4 and 1.5-alpha (the ones I've been using lately). I look forward to trying out nrepl.el.


Saturday, June 23, 2012

HTML <script> tags 101: a JavaScript refresher

Time to revisit the basics with a quiz.

What is the output of this code when run on an otherwise valid HTML page?

<script>
  console.log("DEBUG 1");
  var foo = ;  // Javascript syntax error
  console.log("DEBUG 2");
</script>
<script>
  console.log("FOO");
  console.log("BAR");
</script>

You will get a syntax error along the lines of "Uncaught SyntaxError: Unexpected token ;". Beyond that, what will be printed to the JavaScript console?

Answer Choices

  1. All console log lines print out:

    DEBUG 1
    DEBUG 2
    FOO
    BAR

  2. Console line with DEBUG 2 fails because of the syntax error before it:

    DEBUG 1
    FOO
    BAR

  3. The first script block fails because of the syntax error and the second block is fine:

    FOO
    BAR

  4. The line above the syntax error prints, but all subsequent Javascript is hosed:

    DEBUG 1

  5. No output, since the syntax error stops all Javascript in its tracks.

Before you read any farther, follow Kent Beck's advice: stop and answer out loud.

In many large and aging JavaScript and HTML code bases, there are little syntax errors that get ignored in the heat of getting a release out the door and moving on to higher priority features and bug fixes. It passes QA so the syntax error noise gets ignored.

But sometimes those seemingly harmless little syntax errors can cause big problems. More on that in a minute.

The answer to the quiz above is choice #3.

JavaScript is not parsed line by line like a bash script; it is parsed one script block at a time.

If I run a bash script like this:

#!/bin/bash
echo "foo"
eho "bar"  # syntax error
echo "quux"

it will print out:

foo
./myscript.bash: line 3: eho: command not found
quux

A bash script just parses and executes each line independently of the others, so an error on one line only affects that line.

Each HTML <script> block is fully parsed, possibly compiled (depending on the context/browser), and only run if those steps succeeded. But whether the next <script> tag runs is independent of what happened in the previous one.


/* ---[ Lessons Learned ]--- */

So why is this important?

Suppose you are doing Javascript module requires. With RequireJS, you do a require like this:

<script>
  require(['customerModule', 'orderModule'], function(c, o) {

    var customers = c.getCustomers();
    // do something with customers ...

    var orders = o.getOrders();
    // do something with orders ...
  });
</script>

or with earlier versions of Dojo, you might have a series of requires of its dijit widget library:

<script>
  dojo.require("dijit.form.Button");
  dojo.require("dojox.layout.ContentPane");
</script>

What if you have a simple syntax error in that block of script code? All the requires will fail, but the page will chug along and try to execute the rest of the JavaScript in other script tags or files.

If you have code like this:

<script>
  require(['customerModule', 'orderModule'], function(c, o) {

    var customers = c.getCustomers();
    // do something with customers ...

    var orders = o.getOrders();
    // do something with orders ...
  });

  var foo = ;  // our syntax error again
</script>

Your app will probably fail now. Easy enough - just fix the obvious syntax error and assign something to foo. The tricky part is that the assignment value may be set server-side in ASP, JSP, or PHP code, and some code path ended up with a null value, which produced that JavaScript syntax error.

So you need defensive programming on both sides. On the server side, protect variable interpolation from delivering null or empty values when dynamically generating JavaScript code.

That could be something like this JSP/JSTL code:

<c:if test="${not empty myval}">var foo = ${myval};</c:if>

On the JavaScript side, separate your setup code (such as module requires) from the rest of your executable code as much as possible.