Let's not pretend. CRANQ is different. Especially when you're coming from a coding background, because CRANQ will defy your expectations.

Make no mistake, despite the no-code exterior, working in CRANQ is programming. Dataflow programming.

The gist of the dataflow paradigm is this: there are nodes that take inputs, do something, and send outputs; and there are connections which pass data on from outputs to inputs. That's it.
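
If it helps to see that in familiar terms, here's what a node and a connection boil down to, sketched in plain TypeScript. This is only an illustration of the idea, not how CRANQ works under the hood, and the names are made up.

```typescript
// A minimal sketch of the dataflow idea in plain TypeScript.
// Illustrative only: not CRANQ's runtime or API.

type Listener = (data: number) => void;

// An output is little more than a list of connected inputs it forwards data to.
class Output {
  private listeners: Listener[] = [];

  connect(input: Listener): void {
    this.listeners.push(input);
  }

  send(data: number): void {
    this.listeners.forEach((listener) => listener(data));
  }
}

// A node: takes inputs, does something, sends outputs.
// This one doubles whatever number arrives on its input.
class DoublerNode {
  readonly out = new Output();

  inValue(data: number): void {
    this.out.send(data * 2);
  }
}

// A connection passes data on from an output to an input.
const doubler = new DoublerNode();
doubler.out.connect((data) => console.log(data));
doubler.inValue(21); // logs 42
```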

When nodes represent things from the conceptual landscape of a problem domain, dataflow doesn't even require coding skills. So much so that most no-code tools, especially in the workflow automation space, are based on it.

But when you go deeper, the difficulty of tackling dataflow programming is going to vary based on what programming paradigm or style you have experience with.

Functional Programming (FP)

The most sizable overlap is perhaps with FP. Functional concepts like purity, side effects, and immutability transfer nicely to dataflow. Typical functional data manipulations like map, reduce, and filter have their counterparts, too.

Where dataflow, and especially our flavor of it, differs is that code is assumed to be asynchronous by default, because nodes have independent inputs and outputs. If the node is the dataflow analogy of the function, then it follows that the input is the analogy of the argument, and the output is the analogy of the return value, the callback, and the promise.
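
To make the analogy concrete, here's the same job written once as an async function and once in a node-like shape, in plain TypeScript. The names and structure are assumptions for illustration; they're not CRANQ's API.

```typescript
// Function world: the argument goes in, the promise comes back to the caller.
async function measureLength(url: string): Promise<number> {
  const response = await fetch(url);
  return (await response.text()).length;
}

// Dataflow world: the node has independent inputs and outputs.
// The output plays the role of return value, callback, and promise at once:
// it simply fires whenever a result is ready, for whoever is connected.
class MeasureLengthNode {
  private listeners: Array<(length: number) => void> = [];

  // downstream inputs connect to this output
  connectLength(listener: (length: number) => void): void {
    this.listeners.push(listener);
  }

  // the input: a URL arriving triggers the work, asynchronously
  inUrl(url: string): void {
    fetch(url)
      .then((response) => response.text())
      .then((text) => this.listeners.forEach((send) => send(text.length)));
  }
}
```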

Imperative Programming

Anyone with a degree of coding skill starts out by writing imperative code. Your first program in JavaScript or Python or even Scratch was (or will be) imperative. For loops, while loops, ifs, variables: these are fundamental concepts that developers carry far into their careers. But as ubiquitous as imperative concepts are, they don't mix well with dataflow.

Loops and conditionals are built on the assumption that you have access to shared mutable state, i.e. variables. In dataflow, there are no variables. The closest analogy would be store nodes, but the data they hold is not shared state: when you send a signal to read it out, all connected nodes will receive the value, not just the one that 'asked' for it.
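
A rough sketch of that read-out behavior, again in plain TypeScript. The store node here is a stand-in for illustration, not CRANQ's actual one.

```typescript
// A store node stand-in: it holds a value, and a read-out signal
// broadcasts that value to every connected node.
class StoreNode<T> {
  private stored: T | undefined;
  private listeners: Array<(value: T) => void> = [];

  connect(listener: (value: T) => void): void {
    this.listeners.push(listener);
  }

  // input: store a new value
  inSet(value: T): void {
    this.stored = value;
  }

  // input: read-out signal; ALL connected nodes receive the value,
  // not just the one that 'asked' for it
  inRead(): void {
    if (this.stored !== undefined) {
      const value = this.stored;
      this.listeners.forEach((listener) => listener(value));
    }
  }
}

// Two downstream nodes both receive the same read-out.
const counter = new StoreNode<number>();
counter.connect((value) => console.log("node 1 got", value));
counter.connect((value) => console.log("node 2 got", value));
counter.inSet(5);
counter.inRead(); // both log 5
```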

If instead of trying to find a one-to-one replacement, you look at loops and conditionals in terms of their purpose in a given context, you can find dataflow equivalents. Are you using that loop to iterate over an array? Use an iterator node. Are you using it to build a dictionary? Use a dictionary builder. Are you transforming data? Use a mapper, filter, or reducer. And so on. And the same applies to conditionals. Are you deciding between two continuations of a process? Use a fork. Are you making a decision whether or not to continue? Use a gate.
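
If it helps to see the translation in familiar code, here's the same intent expressed with a loop and a conditional, and then in the functional shape that a dataflow graph mirrors. The code is plain TypeScript; the node names in the comments refer to the CRANQ nodes mentioned above.

```typescript
const prices = [12, 5, 30, 7];

// Imperative: loop + conditional + shared mutable variable
const discountedLoop: number[] = [];
for (const price of prices) {
  if (price > 10) {
    discountedLoop.push(price * 0.9);
  }
}

// Functional: the shape a dataflow graph mirrors.
// In CRANQ you'd wire an iterator over `prices` into a filter node
// (continue or not?) and a mapper node (transform each item);
// a fork would pick between two continuations the way an if/else does.
const discounted = prices
  .filter((price) => price > 10)
  .map((price) => price * 0.9);

console.log(discountedLoop, discounted); // both: [10.8, 27]
```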

This is where the bulk of the shift in mindset lies. But fear not: we see people making this transition every day, and quite fast, actually. We even prepared some material to help you get started. And if you get stuck, we're happy to answer questions or jump on a call.

Object-Oriented Programming (OOP)

Although OOP is where most developers spend a significant portion of their careers, that experience, unfortunately, is of little help in CRANQ. The most OOP-like thing we can say about CRANQ is that there are prototypes (of nodes) and there are instances (of nodes), kind of like classes and instances. But that's where the similarities end, because node instances get created only once, when the program starts, in contrast to instances of classes, which you can bring in and out of existence arbitrarily during runtime.

The reason for this can be summed up in the CRANQ ethos: code is static, data is dynamic. (The second part of this statement also works in reverse: what's dynamic must be data.) When we look at the dataflow graph, we don't want it to change while the program is running. The whole point of visualizing the code is to not have to imagine it. Dynamic instantiation would utterly ruin this experience.
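
To contrast the two instantiation models, here's a sketch in plain TypeScript rather than CRANQ. The node below is hypothetical; the point is only when instances come into existence.

```typescript
// OOP: instances come and go at runtime, on demand.
class Invoice {
  constructor(public total: number) {}
}
function handleOrder(total: number): Invoice {
  return new Invoice(total); // created now, garbage-collected later
}

// CRANQ-style: every node instance is created once, from its prototype,
// when the program starts. The graph never changes afterwards; only the
// data flowing through it does.
class TotalerNode {
  out: (sum: number) => void = () => {};

  inItems(items: number[]): void {
    this.out(items.reduce((sum, item) => sum + item, 0));
  }
}

const totaler = new TotalerNode();        // instantiated once, at startup
totaler.out = (sum) => console.log(sum);  // the graph is wired up front
totaler.inItems([12, 5, 30]);             // at runtime only data moves: logs 47
```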

In the long list of ways CRANQ is unlike OOP, I'd also highlight the lack of inheritance. Prototypes can compose other prototypes, but cannot extend them. Since CRANQ is strongly typed, with data restricted to JSON, one could argue that combining one record type with another counts as inheritance, but to avoid confusion, it's not expressed as such.

Unique to CRANQ: tagged signals

CRANQ's asynchronous-first approach to dataflow helps us sidestep a whole range of concurrency issues that are otherwise head-scratchers in classical programming languages. Solving concurrency this way, however, introduces the problem of synchronizing signals that are to be processed together. Just take adding two numbers. The CRANQ node for addition has two independent inputs: "A" and "B". When should we calculate the sum? When a new value comes in through A or B? What happens before we have values for both? Should we wait until we do? The questions pile up, and we need clarity for predictable behavior.

The secret sauce of CRANQ is that every signal breaks down into two components: data and a tag. The tag identifies the origin of the signal. Signals from the same origin can be synchronized (using a syncer node), so, coming back to our previous example, as long as A and B carry the same tag (i.e. can be traced back to the same origin), a sum will be calculated and sent through the output.
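
Here's roughly what tag-based synchronization boils down to, sketched in plain TypeScript. The signal shape and the adder below are simplified assumptions for illustration, not CRANQ's actual internals.

```typescript
// Every signal carries data plus a tag identifying its origin.
type Signal<T> = { data: T; tag: string };

// An adder that only sums values whose tags match,
// i.e. values that trace back to the same origin.
class AdderNode {
  private pendingA = new Map<string, number>();
  private pendingB = new Map<string, number>();
  out: (signal: Signal<number>) => void = () => {};

  inA(signal: Signal<number>): void {
    this.pendingA.set(signal.tag, signal.data);
    this.trySum(signal.tag);
  }

  inB(signal: Signal<number>): void {
    this.pendingB.set(signal.tag, signal.data);
    this.trySum(signal.tag);
  }

  private trySum(tag: string): void {
    const a = this.pendingA.get(tag);
    const b = this.pendingB.get(tag);
    if (a !== undefined && b !== undefined) {
      this.pendingA.delete(tag);
      this.pendingB.delete(tag);
      this.out({ data: a + b, tag });
    }
  }
}

// Signals may arrive in any order; only matching tags get summed.
const adder = new AdderNode();
adder.out = ({ data, tag }) => console.log(`sum for ${tag}: ${data}`);
adder.inA({ data: 2, tag: "request-1" });
adder.inB({ data: 5, tag: "request-2" }); // waits, no matching A yet
adder.inB({ data: 3, tag: "request-1" }); // logs "sum for request-1: 5"
adder.inA({ data: 4, tag: "request-2" }); // logs "sum for request-2: 9"
```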

This is a very robust system and works quite seamlessly most of the time, but there are some special cases. For example, nodes like iterators and aggregators change the tag of the signal: they're origins of new signals, and because of that their outputs can't be synced with their own inputs.

Cheat sheet

This may be a lot to take in for a first lesson in CRANQ, so I've extracted the gist into bullet points. There aren't that many, and I promise you'll get the hang of it in no time. Once you do, you'll be able to do more, faster. Like us CRANQers.

CRANQ-flavor dataflow

  • nodes (inputs, outputs) and connections
  • asynchronicity first
  • tagged signals
  • composable nodes
  • strongly typed
  • JSON-only
  • code is static, data is dynamic

Imperative analogies

  • variable -> store
  • loop -> iterator, builder, mapper
  • conditional -> fork, gate

FP analogies

  • function -> node
  • function call -> connection
  • argument -> input
  • return value -> output
  • callback / promise -> output

Useful FP concepts

  • pure function
  • side effect
  • immutability
  • map, reduce, filter

OOP analogies and differences

  • class -> prototype
  • code and data are separate
  • no inheritance
  • no instantiation at runtime

- It's all in the connections

Dan