Dusted off – Troposphere word cloud renderer

Troposphere is a word-cloud rendering plugin built in JavaScript on top of HTML5 Canvas, using Fabric.


It has a few artistic options, such as the “jumbliness”, “tightness” and “cuddling” of the words, how they scale with word ranking, and their colour and brightness. It also has an in-page API for connecting UI controls.

I developed this visualisation in my spare time a few years ago, while I was building a product codenamed “Tastemine”, originally conceived by Emily Knox, the social media manager at Deepend Sydney. This proprietary application was designed to collect “stories” – posts, comments and likes on our clients’ Facebook pages – with analytical options ranging from keyword performance tracking through to sentiment analysis. After scraping and histogramming keywords from this (and other potential sources), we added the option to render the result in Troposphere as a playful way to present the data visually.

While ownership of a page enables you to access much more detailed data, it’s surprising how much brand identity you can collect from public stories on company pages. We used this to great effect, walking into pitches with colourful (if slightly corny) renditions of the social zeitgeist happening right now on their social media. Some embarrassing truths and some pleasant surprises were amongst the big friendly letters.

I think the scaling of the word frequencies is one of the most compelling aspects of word clouds. Often I’d hear clients cooing over seeing their target brand-message keywords standing out in large bold letters. Whether it truly proves success is questionable, but the big words had spoken. We certainly used it to visualise success trends over time, as the messages we were marketing grew larger in the social chatter.


It was, at the time of writing, the only pure-HTML5 implementation, with visual inspiration taken from existing Java and other backend cloud generators such as Wordle, which didn’t have APIs and so couldn’t be used programmatically. It was quite challenging to get right: it still isn’t perfect at keeping some letters from crashing together, and it still takes some time to render large clouds. I made huge leaps in optimisation and accuracy during development, so it may be possible to perfect it further with improved collision detection and masking algorithms.

It was built in the very early days of HTML5 Canvas, when we were all lamenting the loss of the enormous Flash toolchain. It felt like 1999 all over again: programming in “raw” primitives with no hope of finding libraries for sophisticated and well-optimised kerning, tweening, animating or transforming – and forget physics. These were both sad and exciting times – we were in a chasm between the death of Flash and the establishment of a proper groundswell of JavaScript. At the time Fabric was one of the contenders for a humanising layer over the raw Canvas API, handling polyfills and normalisation, plus a mini-framework which actually had all sorts of strange scoping quirks.

One of my dear friends, and at the time a rising-star developer, Lucy Minshall was suffering more than most from the sudden demise of Flash – being a Flash developer. I chose this project as training material for her, as it was a good transition example from the bad old ways of evil proprietary Adobe APIs to the brave new future following the Saints of WHATWG. It also contained some really classic programming problems, difficult maths, and required a visual aesthetic – perfect for a talented designer-turned-Flash-developer like Lucy. Who cares what language you’re writing in anyway – it’s the algorithms that matter!

The most interesting and difficult part of the project was “cuddling” the words, as Lucy and I came to call it with endless mirth. This was the idea of fitting the word shapes together like Tetris pieces so they didn’t overlap. Initially I implemented a box model where the square around each glyph couldn’t intersect with another bounding box. That was easy! Surely it wouldn’t be so hard to swap that layout strategy for one that respected the glyph outlines?
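
For reference, the box-model check really is a one-liner – which is why it was easy. A sketch of the idea:

// Axis-aligned bounding-box test: true if the rectangles overlap at all.
// Each rect is {x, y, w, h} – the square around a rendered word.
function boxesIntersect(a, b) {
  return a.x < b.x + b.w &&
         b.x < a.x + a.w &&
         a.y < b.y + b.h &&
         b.y < a.y + a.h;
}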

While I can’t remember all the possibilities I tried (there were lots, utter failures, false-starts, weak algorithms and CPU toasters) a few of the techniques which stuck are still interesting.

The main layout technique (inspiration source sadly lost) was placing the words in a spiral from the centre outwards. This really helped both the placement algorithm – giving a “cloud” shape – and the visual appeal and pseudo-randomness, considering people don’t like truly random things.
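
A minimal sketch of that idea (not the actual Troposphere code): walk an Archimedean spiral outwards from the centre, offering each point as a candidate position until the word no longer collides with anything already placed. The collides() callback here is a stand-in for whichever collision test is in use.

// Walk an Archimedean spiral (r = spacing * angle) from the centre outwards,
// returning the first position where the word doesn't collide with anything.
function placeWord(word, collides, step = 0.1, spacing = 2) {
  let angle = 0;
  while (angle < 200 * Math.PI) {      // give up eventually
    const radius = spacing * angle;
    const x = Math.cos(angle) * radius;
    const y = Math.sin(angle) * radius;
    if (!collides(word, x, y)) {
      return { x: x, y: y };           // first free spot wins
    }
    angle += step;
  }
  return null;                         // couldn't fit this word anywhere sensible
}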

Another technique I borrowed from my years as a Windows C/C++ programmer in the “multimedia” days was bit-blitting and double/triple buffering. This was a pleasant Canvas surprise, as bitmap operations were pretty new to Flash at the time and felt generally impossible on the web. The operations used to test whether words were overlapping involved some visually distressing artefacts – masks, colour inversions and so on – so I needed to do that work off-screen. Also, for performance purposes I only needed to scan the intersecting bounding boxes of the words, so copying small areas to a secondary canvas for collision detection was much more efficient. Fortunately Canvas allows you to do these kinds of raster operations (browser willing) even though it’s mainly a vector-oriented API.
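
Roughly how that off-screen test works, as a sketch (illustrative names, not the real internals): the already-placed words are drawn on the main canvas, so to test a candidate spot you blit just that bounding box onto a scratch canvas, mask it with the new word’s glyphs, and check whether any ink survives.

// A scratch canvas that is never attached to the DOM, so the ugly intermediate
// masking steps stay invisible.
const scratch = document.createElement('canvas');
const sctx = scratch.getContext('2d');

function overlapsExistingInk(mainCanvas, word, x, y, w, h, font) {
  scratch.width = w;
  scratch.height = h;

  // 1. copy only the small candidate region off-screen
  sctx.drawImage(mainCanvas, x, y, w, h, 0, 0, w, h);

  // 2. keep existing pixels only where the new word's glyphs would also paint
  sctx.globalCompositeOperation = 'destination-in';
  sctx.font = font;
  sctx.textBaseline = 'top';
  sctx.fillText(word, 0, 0);
  sctx.globalCompositeOperation = 'source-over'; // reset for next time

  // 3. any surviving opaque pixel means the glyphs themselves (not just boxes) clash
  const data = sctx.getImageData(0, 0, w, h).data;
  for (let i = 3; i < data.length; i += 4) {
    if (data[i] > 0) return true;
  }
  return false;
}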

Producing natural-looking variation in computer programs often suffers from the previously mentioned problem that true randomness and the human perception of randomness are two very different things. People are so good at recognising patterns in noisy environments that you have to purposely smooth out random clusters to avoid people having religious crises.

During this project I produced a couple of interesting random variants which I simply couldn’t find in the public domain at the time. The randomisation I developed is based around the normal distribution (bell curve), cut off at around three standard deviations to prevent wild outliers, instead of at the usual min–max. The problem with typical random numbers over many iterations is that you get a “flat line” of equal probabilities between the min and max, like a square plateau. This isn’t normal! Say your minimum is 5 and your max is 10: over time you’ll get many 5.01s but never a single 4.99. In real life, nearly everything is a normal distribution! Really you want to ask an RNG for a centre point and a standard deviation to give the spread. I was pretty surprised (after coming up with the idea) that I couldn’t find anything, in any language, that implemented it. I’d been working on government-certified RNGs recently, and had even interfaced with radioactive-decay-driven RNGs in my youth, so I believed I was relatively well versed in the topic. So I reached for my old maths textbooks and did it myself – with some tests, and visualisations of course!
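
A minimal sketch of the technique (assuming nothing about the original implementation): Box-Muller gives a standard normal sample, which is then clamped to three standard deviations to avoid the wild outliers mentioned above.

// Normally-distributed random numbers: ask for a centre point and a spread,
// rather than a min and a max.
function randomNormal(mean = 0, stdev = 1) {
  const u1 = 1 - Math.random();   // in (0, 1], avoids log(0)
  const u2 = Math.random();
  let z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
  z = Math.max(-3, Math.min(3, z));  // cut off at three standard deviations
  return mean + z * stdev;
}

// e.g. a word rotation that is usually near zero but occasionally jumbled:
// const rotationDegrees = randomNormal(0, 5);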

Having bell-curve weighted random numbers really helped give a soothing natural feel to the “jumbliness” of the words and to the spread of the generated colour palettes. It’s an effect that’s difficult to describe – it has to be seen (A/B tested) to be appreciated. I wonder if they are secretly used in other areas of human-computer relations.

Performance was one of the biggest – or at least longest – challenges. In fact it never ended: I was never totally happy with how hot my laptop got on really big clouds with all the rendering options turned on. Built into the library are some performance statistics and – you guessed it – meta-visualisation tools, in the form of histograms of processing time by word size.

I also experimented with sub-pixel vs. whole-pixel rendering, but didn’t find the optimisation gains that some people swore by when rounding to true pixels.

After a lot of hair pulling, there were some really fun moments when a sudden head-slap moment led to a reworking of the collision detection algorithm (the main CPU hog), which gave us a huge leap in performance. I’m sure there are still many optimisations to make, and I’d be happy to accept any input – hence why I’ve open-sourced it after all this time.

While tag clouds may be the Mullets of the Internet, programming them almost certainly contributes to baldness.



Trialling the ELK stack from Elastic

As part of my ongoing research into big-data visualisation and infrastructure and application management tools, it was time to give ELK a test run to check whether it suits my needs. I’ve already looked at a few others (which I will detail in another article), and so far haven’t found anything suitable for an SMB to collate and process both “live” information from applications, e.g. from existing databases or APIs, combined with “passive” information from log files.

Some of the applications I work with are modifiable, so we can take advantage of APIs to push event-driven data out to analytics or monitoring platforms, but some legacy components are just too hard to upgrade, so log-scraping could be the only viable option. I already use tools at various stack levels such as New Relic (APM), Datadog/AWS (infrastructure), Google Analytics (web/user) and my special interest: custom business-event monitoring. Ideally these various sources could be combined to produce some extremely powerful emergent information, but the tools at these different levels are often specific to the needs of that level – such as hardware metrics vs. user events – and thus difficult to integrate.

My Experiences Trialling Elastic 

It seemed from an initial look that the Elastic stack was flexible and agnostic enough to be able to provide any of these aspects. But would it be a jack-of-all and master-of-none?

To give it a go, I looked at the three main components separately at first. Simply put:

  1. LogStash – data acquisition pipeline
  2. Elasticsearch – search engine & API
  3. Kibana – visualisations dashboard

They provide hosted services, but I didn’t feel like committing just yet and didn’t want to be rushed in a limited-time trial, so I downloaded and installed the servers locally. I mentally prepared myself for hours of installation and dependency hell after my experiences with Graphite and Datalab.

But – these were my only notes during the fantastically quick set-up:

  • Too easy to set up; built in Java, it just ran on my MacBook!
  • Tutorial: input from my local Apache log files -> Elasticsearch, processed really quickly!
  • Logstash Grok filter would be key to parsing our log files…

I just unzipped it and ran it and it worked. I know that’s how the world should work, but this is the first time I’ve experienced that for years.

Interest piqued, I decided to run through the tutorials and then move on to setting up a real-world log import scenario for a potential client. I noted down the things I discovered, hopefully they will help other people on a similar first journey. At least it will help me when I return to this later and have predictably forgotten it all.

LogStash – data acquisition pipeline

I ran through the Apache logs tutorial, after completing the basic demos.

The default index of logstash-output-elasticsearch is “logstash-%{+YYYY.MM.dd}”, which is not mentioned in the tutorials. Thus all Apache logs are indexed under this, hence the default search looks like http://localhost:9200/logstash-2016.07.11/_search?q=response=200

I don’t think this will be useful in reality – having an index for every day, but I guess we’ll get to that later. Furthermore the timestamp imported is today’s date, i.e. the time of the import, not the time parsed from the logs. [I will address this later, below]

Interesting initial API calls to explore:

http://localhost:9200/_cat/indices?v – all (top level) indices, v = verbose, with headers.

http://localhost:9200/_cat/health?v   – like Apache status

http://localhost:9200/_cat/ – all top level information

Grok – import parser

Grok is one of the most important filter plugins – enabling you to parse any log file format, standard or custom. So I quickly tried to write an info trace-log grok filter for some legacy logs I often analyse manually and thus know very well. This would make it easier to evaluate the quality and depth of the tools – “how much more will these tools let me see?”

My first noobish attempt was an “inline” pattern in the Grok configuration. A toe in the water.

filter {
    grok {
        # example: 01/03/2016 23:59:43.15 INFO: SENT: 11:59:43 PM DEVICE:123:<COMMAND>
        match => { "message" => "%{DATESTAMP:date} %{WORD:logType}: %{WORD:direction}: %{TIME} %{WORD:ampm} DEVICE:%{INT:deviceId}:%{GREEDYDATA:command}" }
    }
}

I found it hard to debug at first: I thought it wasn’t importing because I saw no errors and no obvious feedback – in fact it was working all along! It took me a little while to get the trial-and-error configuration debug cycle right. Tips:

  • This grok-debug tool was good.
  • This one also.
  • Core Grok patterns reference is vital
  • A regex cheat sheet also helps, as Grok is built on it.
  • Start Logstash with -v to get verbose log output, or even more with --debug
  • Restart logstash when you make config changes (duh)
  • The config is not JSON. Obvious but this kept catching me out because most other aspects you’ll need to learn simultaneously are in JSON. (what’s with the funny PHP-looking key => values, are they a thing?)

OK. I got my custom log imported fairly easily – but how do I access it?

Elasticsearch – search engine & API

During my first 5 minutes going through the ES tutorials I noted:

  • Search uses a verbose JSON DSL via POST (not so handy for quick browser hackery)
  • However I found you can do quick GET queries via mini-language
  • Search properties: http://localhost:9200/logstash-2016.07.12/_search?q=deviceId:1234&response=200&pretty
  • To return specific properties e.g.: &_source=logType,direction,command
  • Scores – this is search-engine stuff (relevance, probabilities, distances) as opposed to SQL “perfect” responses.
  • Aggregations (like SQL’s GROUP BY) for stats – didn’t try them (POST only and I’m lazy today), but they look good. Would probably explore them more in Kibana

Great – the REST API looks really good, easily explorable with every feature I had hoped for and more. The “scores” aspect made me realise that this isn’t just a data API, this is a proper search engine too, with interesting features such as fuzziness and Levenshtein distances. I hadn’t really thought of using that – from a traditional data accuracy perspective this seemed all a bit too gooey, but perhaps there will be a niche I could use it for.
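
As a tiny illustration of how scriptable this makes things (a sketch only – the index name and fields are just the ones from my tutorial imports above, run from Node 18+ or anywhere CORS allows):

// Fetch the first few matching log entries via the URI "mini-language" search.
fetch('http://localhost:9200/logstash-2016.07.12/_search?q=deviceId:1234&_source=logType,direction,command&size=5&pretty')
  .then(res => res.json())
  .then(body => {
    // each hit carries a _score (relevance) alongside the stored _source fields
    console.log(body.hits.total, body.hits.hits.map(h => h._source));
  });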

Kibana – for visualisations

  • Downloaded and “installed” the tar.gz – again it worked perfectly, instantly.
  • Ran it, and it set itself up.
  • Created a default index pattern on logstash-* and instantly saw all the data from the import above.

So again great, this was “too easy” to get up and running, literally within 5 minutes I was exploring the data from the Apache tutorial in wonderful technicolor.

After that broad sweep I was feeling good about the stack, so it was time to go a bit deeper.


It seems the next (it should have been the first!) major task is to map the fields to types, otherwise they all end up as strings (each with a fieldname.raw variant).

But… you cannot change mappings on existing data – so you must set them up first! However you can create a new index, and re-index the data… somehow, but I found it easier to start again for the moment.

I couldn’t figure out (or there isn’t) a mini-language for creating mappings via GET, so I used the example curl commands, which weren’t as annoying as I’d thought they’d be – except I do use my browser URL history as my memory, and it’s just a bit harder to re-edit and re-use SSH shell histories than in-browser URLs.

curl -XPUT http://localhost:9200/ra-info-tracelog -d '
{
  "mappings": {
    "log": {
      "properties": {
        "date": { "type" : "date" },
        "deviceId": { "type" : "integer" }
      }
    }
  }
}'


Getting the date format right…

The legacy server from which these logs came doesn’t use a strict datetime format (sigh), and Logstash was throwing errors.

# example 01/03/2016 23:59:43.15

Initially I tried to write a “Custom Pattern”, but then I found the Grok date property format should be able to handle it, even with only 2 digits of fractional seconds (the default is 3 digits). To figure this out I had to manually traverse the tree of patterns in the core library, from DATESTAMP down through its children. This was actually a good exercise in learning how the match patterns work – very much like an NLP grammar definition (my AI degree is all coming back to me).

So why is Grok erroring when the pattern is correct?

It took me a while to realise it’s because the Grok DATESTAMP pattern is just a regexp to parse the message data into pieces but is more permissive than the default date field mapping specification in the subsequent Elasticsearch output stage. So it tokenises the date syntactically, but it’s the field mapping which then tries to interpret it semantically which fails.

OK, so I felt I should write a custom property mapping to accommodate the legacy format.

    "date": {
        "type" : "date" ,
        "format": "dd/MM/yyyy HH:mm:ss.SS"

Mild annoyance alert: To do these changes I had to keep re-creating the indexes and changing the output spec, restarting Logstash and changing my debug query URLs. So it’s worth learning how to re-index data, or (when doing it for real) get this right first in an IA scoping stage.

Tip: I debugged this by pasting very specific single logs, one line at a time, into the test.log file which Logstash is monitoring. Don’t just point it at a huge log file!

So many formats!

The date mappings are in yet another language/specification/format/standard called Joda. At this point I started to feel a little overwhelmed with all the different formats you need to learn to get a job done. I don’t mind learning a format, but I’m already juggling three or four new syntaxes and switching between them when I realise I need to move a filtering task to a different layer is an off-putting mix of laborious and confusing.

For example I just learned how to make optional matches in Grok patterns, but I can’t apply it here and do “HH:mm:ss.SS(S)?” to cope with log-oddities, which is a frustrating dead-end for this approach. So I have to look again at all the other layers to see how I can resolve this with a more flexible tool.

OK, once the date mapping works… it all imports successfully.

Creating a Kibana visualisation

To use this time field you create a new index pattern in Kibana > Settings > Indices and select the “date” field above as the “Time-field name”; otherwise it will use the import time as the timestamp – which won’t be right when we’re importing older logs. (It would be almost right if logs were being written and processed immediately, but that won’t be accurate enough for me.)

I loaded in a few hundred thousand logs, and viewed them immediately in Kibana… which looks good! There are immediately all the UI filters from my dreams, it looks like it will do everything I want.

But there’s an easier way!

The Date filter allows you to immediately parse a date from a field into the default @timestamp field. “The date filter is especially important for sorting events and for backfilling old data.” which is exactly what I’m doing.

So I made a new filter config:

filter {
    grok {
        match => { "message" => "%{DATESTAMP:date} ..." }
    }
    date {
        match => ["date", "dd/MM/yyyy HH:mm:ss.SS"]
    }
}

And it turned up like this (note: without using the field mapping above, so the redundant “date” property here is just a string). Also note the UTC conversion, which was a little confusing at first, especially as I unwittingly chose to test a log across the rare 29th of Feb!

    "@timestamp" : "2016-02-29T13:00:23.600Z",
    "date" : "01/03/2016 00:00:23.60",

The desired result was achieved: this showed up in Kibana instantly, without having to specify a custom time-field name.

I got it wrong initially, but at least that helped me to understand what these log-processing facilities are saving you from having to do later (possibly many times).

Making more complex patterns

The legacy log format I’m trialling has typical info/warning/error logs, but each type also has a mix of a few formats for different events. To break down these various log types, you need to implement a grammar tree of expressions in custom Grok Patterns.

The first entries should be the component “elements”, such as verbose booleans or enumerations:

    ONOFF (?:On|Off)

If a log file has various entry types – like sent/received, connection requests and other actions – then the next entries should match the various log-line variants, composed of those custom elements and any standard patterns from the core libraries.

# e.g. 01/02/2016 01:02:34.56 INFO: Connection request: device:1234 from ()
INFO_CONNECTION_REQUEST %{DATESTAMP:date} %{LOG_TYPE:logType}: Connection request: device:%{INT:deviceId} from %{HOSTPORT:deviceIP} \(%{HOSTNAME}\)

Then finally you have one log-line super-type (the %{INFO_LINE} used in the configurations below) which matches any of the log-line variants.


Again the tools mentioned above were crucial in diagnosing the records which arrive in Elasticsearch tagged with _grokparsefailure while you are developing these patterns.

For annoyingly “flexible” legacy log formats I found these useful:

  • Optional element: ( exp )?  e.g. (%{HOSTNAME})?
  • Escape brackets: \(  e.g. \(%{INT:userId}\)
  • Non-capturing group: (?: exp )  e.g. (?:INFO|WARNING|ERROR|DEBUG)

Differentiating log variants in the resulting data

Next I wanted to be able to differentiate which log-line variant each log had actually matched. This turned out to be harder than I had thought. There doesn’t seem to be a mechanism within the regular-expression matching capabilities of the Grok patterns themselves to, say, “set a constant” when a specific pattern matches.

The accepted method is to use logic in the pipeline configuration file plus the abilities to add_tags or add_fields in the Grok configuration. This approach is sadly a bit wet (not DRY) as you have to repeat the common configuration options for each variant. I tried to find other solutions, but currently I haven’t resolved the repetitions.

grok {
    match => { "message" => "%{INFO_CONNECTION_CLOSED}" }
    patterns_dir => ["mypatterns"]
    add_tag => [ "connection" ]
}
grok {
    match => { "message" => "%{RA_INFO_LINE}" }
    patterns_dir => ["mypatterns"]
}

However this can also result in a false _grokparsefailure tag, because the two configurations are run sequentially, regardless of a match. So if the first one matches, the second will fail.

One solution is to use logic to check the results of the match as you progress.

    grok {
        match => { "message" => "%{INFO_CONNECTION_CLOSED}" }
        patterns_dir => ["mypatterns"]
        add_tag => [ "connection" ]
    }
    if ("connection" not in [tags]) {
        grok {
            match => { "message" => "%{INFO_LINE}" }
            patterns_dir => ["mypatterns"]
        }
    }

This works well, and for these log-line variants, I’m now getting a “connection” tag, which can enable API queries/Kibana to know to expect a totally different set of properties for items in the same index. I see this tag as a kind of “classname” – but I don’t know yet if I’m going down the right road with that OO thought train!

    "@timestamp" : "2016-02-29T13:00:25.520Z",
    "logType" : [ "INFO", "INFO" ],
    "deviceId" : [ "123", "123" ],
    "connection_age" : [ "980", "980" ],
    "tags" : [ "connection" ]

Another method is to “pre-parse” the message and only perform certain groks for specific patterns. But again it still feels like this is duplicating work from the patterns.

    if [message] =~ /took\s\d+/ { grok { ... } }

Even with the conditional in place above, the first filter technically fails before the second one succeeds. This means the first failure will still add a “_grokparsefailure” tag to an eventually successful import!

The final workaround is to manually remove the failure tags in all but the last filter:

    grok {
        match => { "message" => "%{INFO_CONNECTION_CLOSED}" }
        add_tag => [ "connection" ]
        # don't necessarily fail yet...
        tag_on_failure => [ ]
    }

So while I am still very impressed with the ELK stack, I am starting to see that coping with real-world complexities isn’t straightforward and can lead to some relatively hacky and unscalable techniques, due to the limited configuration language. It’s these details that sway people from one platform to another, but it’s difficult to find those sticking points until you’ve really fought with it – as Seraph so wisely put it.

Loading up some “big” data

Now I was ready to import a chunk of old logs and give it a good test run. I have a lot – a lot – of potential archive data going back years. Even on my six-year-old MacBook Pro it imported fairly quickly: Logstash chewed through 200,000 logs into Elasticsearch in a few minutes. (I know this is tiny data, but it’s all I had in my clipboard at the time.) I’m looking forward to testing millions of logs on a more production-tuned server and benchmarking it with proper indexing and schema set up.

Heading back to Kibana, I was able to explore the data more thoroughly now it’s a bit more organised. The main process goes through:

  1. data discovery
  2. to making visualisations
  3. and then arranging them on dashboards.

This process is intuitive and exactly what you want to do. You can explore the data by building queries with the help of the GUI, or hand-craft some of it with more knowledge of the Elasticsearch API, then you can save these queries for re-use later in the visualisation tools.

Even in the default charts, I instantly saw some interesting patterns including blocks of missing data which looked like a server outage, unusual spikes of activity, and the typical camel-humps of the weekend traffic patterns. These patterns are difficult to spot in the raw logs, unless you have Cipher eyes.

I had a quick look at the custom visualisations, particularly the bar charts, and found you can quite easily create sub-groups from various fields. I started to realise how powerful the post-processing capabilities of Kibana could be in slicing up the resulting data further.

Summary thoughts

In summary I feel the ELK stack can certainly do what I set out to achieve – getting business value out of gigabytes of old logs and current logs without having to modify legacy servers. I feel it could handle both infrastructure level monitoring and the custom business-events both stored in logs and fired from our APIs and via MQs.

The component architecture and exposed REST API are also flexible enough to feed easily into other existing data-processing pipelines instead of Kibana, including my latest pet project Logline, which visualises mashups of event-driven logs from various sources using the vis.js Timeline.

Next steps

I feel I’m now ready to present this back to the folks at the organisations I consult for and confidently offer it as a viable solution. It offers the tools for building a business intelligence analysis platform and, with the addition of monitoring tools such as Watcher, could potentially bring that retrospective intelligence into real time.

Beyond that – the next step could even be predicting the future, but that’s another story.


PHPUnit-Selenium2 Cheat Sheet

My PHPUnit-Selenium2 Cheat Sheet

Here are a few snippets of how I’ve achieved various tasks, some tricks and patterns in phpunit/phpunit-selenium v2.0 – targeting Selenium2. I’ll try to keep this updated with more techniques over time.


I wrote this small hook to make screenshots automatic, like they used to be. Of course you may want to put a timestamp in the file, but I usually only want the last problem.

/**
 * PHPUnit-Selenium v1 used to have automatic screenshots as a feature; in v2 you have to do it "manually".
 */
public function onNotSuccessfulTest(Exception $e){
    file_put_contents(__DIR__.'/../../out/screenshots/screenshot1.png', $this->currentScreenshot());
    parent::onNotSuccessfulTest($e); // re-throw so the test still fails as normal
}


Waiting for stuff

An eternal issue in automated testing is latency and timeouts. Particularly problematic in anything other than a standard onclick-pageload cycle, such as pop-up calendars or a JS app. Again I felt the move from Selenium1 to 2 made this much more clumsy, so I wrote this simple wrapper for the common wait pattern boilerplate.

/**
 * Utility method to wait for an element to appear.
 * @param string $selector
 * @param int    $timeout milliseconds wait cap, after which you'll get an error
 */
protected function waitFor($selector, $timeout=self::WAIT_TIMEOUT_MS){
    $this->waitUntil(function(PHPUnit_Extensions_Selenium2TestCase $testCase) use($selector){
        try {
            // throws a WebDriverException until the element exists
            $testCase->byCssSelector($selector);
        } catch (PHPUnit_Extensions_Selenium2TestCase_WebDriverException $e) {
            return null; // not there yet - keep waiting
        }
        return true;
    }, $timeout);
}

Checking for the right page, reliably.

If something goes wrong in a complex test with lots of interactions, it’s important to fail fast – for example, if the wrong page loads, nothing else will work very well. So I always check that the page being tested is the right page. To do this reliably, without using content- or design-specific elements, I add a <body> tag “id” attribute to every page (you could use a body class if you’re already using that styling technique, but I tend to separate my QA tagging from CSS dependencies). Then I added this assertion to my base test case.

/**
 * We use <body id="XXX"> to identify pages reliably.
 * @param $id
 */
protected function assertBodyIDEquals($id){
    $this->assertEquals($id, $this->byCssSelector('body')->attribute('id'));
}

Getting Value

The ->value() method was removed in Selenium v2.42.0. The replacement method is to use $element->attribute(‘value’) [source]

// old way
//$sCurrentStimulus = $this->byName('word_index')->value();
// new way
$sCurrentStimulus = $this->byName('word_index')->attribute('value');
// I actually use this now:
$sCurrentStimulus = $this->byCssSelector('input[name=word_index]')->attribute('value');

However ->value() was also a mutator (setter), which ->attribute() is not. So if you want to update a value, people say you have to resort to injecting JavaScript into the page, which I found somewhat distasteful. Luckily this is not the case for the “value” attribute specifically: according to the source code, it’s only the GET that was removed from ->value().

JSON Wire Protocol only supports POST to /value now. To get the value of an element, GET /attribute/:name should be used

So I can carry on doing this, presumably until the next update breaks everything.


General Page Tests

I have one test suite that just whips through a list of all known pages on a site and scans them for errors, a visual regression smoke test for really stupid errors. It’s also easy to drop a call to this method in at the beginning of any test. When I spot other visual errors occurring, I can add them to the list.

/**
 * Looks for in-page errors.
 */
protected function checkErrors() {
    $txt = $this->byTag('body')->text();
    $src = $this->source();

    // Removed: this false-positives on the news page.
    //$this->assertNotContains('error', $this->byTag('body')->text());

    // Standard CI errors
    $this->assertNotContains('A Database Error Occurred', $txt);
    $this->assertNotContains('404 Page Not Found', $txt);
    $this->assertNotContains('An Error Was Encountered', $txt);
    // PHP errors
    $this->assertNotContains('Fatal error:', $txt);
    $this->assertNotContains('Parse error:', $txt);

    // The source might have hidden errors, but it also might legitimately contain the word "error"
    // (false positive on the user form, which must have validation error text!)
    //$this->assertNotContains('error', $this->source());
    $this->assertNotContains('xdebug-error', $src); // XDebug wrapper class
}




MOTIf v2.0 – responsive redesign

After 8 years the MOTIf website was starting to show its age, visually at least.

While I have performed regular technical updates to keep it browser-compatible and future-proofed, we made a fixed-layout decision (rather than fluid) in 2007, so it has never worked well on these newfangled smartphones and phablet whatnots. Sadly though, the main driver for the recent redesign was actually a need to distance ourselves from some unscrupulous people tacitly claiming the site was their own! We decided it was time to rebrand the site and introduce the key people in the team on a new “About Us” page – the site had previously had something of an air of mystery about it, for… reasons (as the kids say nowadays).

So I thought it was time for a complete front-end rebuild, and dusted off everything I learned while working at Deepend building what were cutting-edge responsive sites (three or four years ago now). We spent huge efforts pioneering in this field, and even built our own front-end framework/reset/bootstrap.

Seeing as I’m working voluntarily on the site now, my time is a scarce resource, so I decided to stand on the giant shoulders of Twitter Bootstrap – replacing Blueprint, which had served the site well since 2007. Blueprint was great as a reset and grid system, but it came before responsive design had been invented and would have required an m-site (remember those?). TBS 4 is about to come out but it’s not even in RC yet, so I chose TBS 3, which I’m relatively familiar with. (The only thing I don’t like about TBS is that it comes with “style” which you have to get rid of, rather than being a purely vanilla reset and grid framework.)

One of the great tools we used at Deepend was BrowserSync, which upgrades you into the robot octopus required for responsive testing on multiple devices. It automatically reloads the pages after you’ve edited the source, but also syncs the navigation and even scrolling across all devices – it’s quite amazing to see it working.


While pondering a new front-end build, I realised I’ve now changed allegiances from Grunt to Gulp. I was a great fan of Grunt, so the transition was hesitant, but there is a certain beauty and simplicity to the concept of Gulp in which I’m more keen to invest time (than learning more ad-hoc config formats). I’ve been using it recently with a node.js/redis application (SciWriter – coming soon!) and it just feels more like an integral part of the system, being in JavaScript and allowing interoperability with the server codebase if required. Also the logo is far less frightening.

I was pleased to see there is now an official version of Bootstrap w’ SASS, (rather than the previous third-party version), as I’m more a fan of SASS than LESS. To be honest I can’t remember the details of why now, but after a couple of years of trying both in dozens of projects at Deepend we all plumped for SASS as the marginally superior platform.

To get SASS building in Gulp, I ditched my previous ally Compass for gulp-ruby-sass. I found it relatively tricky to wire up the SASS build as the twbs/bootstrap-sass documentation has myriad options including combinations of Rails, Bower, Compass, Node, Sprockets, Mincer, Rake… aagh what! But after thinking it through and a short walk around the block I found gulp-ruby-sass was the right choice for me – as I am using Bower and Gulp.

Once the set of dependencies and technologies were chosen, the actual install ended up quite straightforward:

  • update/install Ruby, Node, NPM etc.
  • install Bootstrap with Bower
  • install Gulp with Node NPM
  • install BrowserSync and gulp-ruby-sass into Gulp

I set up a src folder in the site with some new .gitignore entries for bower_components, the sass cache and node_modules, and then created a JS and CSS build in the gulpfile.

As I am migrating an existing site, I decided to use the SCSS format (rather than SASS). The great thing about SCSS being a superset of CSS is that I could just drop the original 2007 motif.css (designed over Blueprint) into the src/scss directory and start migrating to the new site. I much prefer a format closer to CSS, and I am not much of a fan of oversimplified syntax transpilers such as CoffeeScript. It just feels like yet another language to learn, and takes your knowledge further from the true W3C stack – all for a few braces?

Now I was ready to splice the BS3 “starter” template header into the site’s header view template, fiddle around a little with the JS/CSS imports and see what the site looked like for fun… I was actually pretty amazed to see the site looked relatively intact and was already responsive! I believe this is testament to the semantic markup approach of both BS and my previous work on the site – the old and new CSS didn’t conflict directly, but intermingled relatively harmlessly.

Now the job was to go through the original CSS and HTML finding any specific classes (and div structures of course) for Blueprint or my custom elements – like rounded corners from the years before border-radius. (I did chuckle when the major browsers finally implemented border-radius and box-shadow – just in time for flat design.) This was the “easy but long” task, after the quick wins of importing such power from all these great frameworks.

I am truly appreciative of tools such as Bootstrap, Gulp, Bower and SASS. Over the 30+ years I’ve been developing I have implemented similar frameworks or solutions for myself or my teams, before they existed publicly. I know how hard they are to get right. It’s a real pleasure to use well-designed tools built by people who really know what they’re used for. Plus it’s a relief not to have to build it all myself again as languages shift in and out of fashion! (Ah the memories, that old Perl CMS… countless templating systems… the time we cleverly named ours “Deepstrap” then immediately regretted Googling the name for trademarks.)

Getting BrowserSync to work perfectly took a couple of attempts. I saw the “inbuilt server” wasn’t useful to me as I have a CMS and backend and it only serves flat HTML. So I tried the proxy, but it replaced all my nice SEO URLs and local domain with simple IP addresses, which defeated the routing. So I eventually built the snippet injection into my application itself – i.e. my web application is now “Browsersync Aware”.

To do this, I first added a controller parameter to enable browserSync in a session, but then also configured it to be always-on in the DEV deployment (avoiding having to enable it in many devices, but still allowing occasional debugging in production). My body template is now rendered thus:

<body <?= isset($body_id) ? 'id="'.$body_id.'"' : '' ?>>
<?= isset($browserSync) ? '<script async src="//'.$_SERVER['SERVER_NAME'].':3000/browser-sync/browser-sync-client.2.9.11.js"></script>'  : '' ?>

The gulpfile is still evolving, but this is how it currently works. Everything is built on-change via watch and deployed directly to the site directories.

// MOTIf Front-end src build - Gulp file

// Define base folders
var src = 'src';
var dest = '..';

var gulp = require('gulp');
var concat = require('gulp-concat');
var rename = require('gulp-rename');
var uglify = require('gulp-uglify');
var sass = require('gulp-ruby-sass');
var debug = require('gulp-debug');
var browserSync = require('browser-sync').create();

// JS build: concatenate, minify and deploy straight into the site
// (destination sub-folders here are illustrative)
gulp.task('scripts', function() {
 return gulp.src(src+'/js/*.js')
  //.pipe(debug({title: 'debugjs:'}))
  .pipe(concat('scripts.js'))
  .pipe(uglify())
  .pipe(rename({suffix: '.min'}))
  .pipe(gulp.dest(dest+'/js'));
});

// CSS build
gulp.task('sass', function() {
 //return sass(src+'/scss/**/*.scss', {verbose: false}) /* NB: glob fixed a frustrating "0 items" problem! */
 return sass(src+'/scss/styles.scss', {verbose: false}) // prevent multi-compile of includes in this folder - it's a pure tree.
  .on('error', function (err) {
   console.error('Error!', err.message);
  })
  //.pipe(debug({title: 'debugsass:'}))
  .pipe(rename({suffix: '.min'}))
  .pipe(gulp.dest(dest+'/css'));
});

// Images: just copy through to the site
gulp.task('images', function() {
 return gulp.src(src+'/images/**/*')
  .pipe(gulp.dest(dest+'/images'));
});

// Browsersync: no built-in server or proxy here - the snippet is injected by the app itself (see above)
gulp.task('browser-sync', function() {
 browserSync.init({
  notify: false // the "connected to browsersync" message gets in the way of the nav!
 });
});

// the watcher bee
gulp.task('watch', function() {
 gulp.watch(src+'/js/*.js', ['scripts']);
 gulp.watch(src+'/scss/*.scss', ['sass']);
 gulp.watch('../system/application/views/**/*.php').on('change', browserSync.reload);
 gulp.watch(src+'/images/**/*', ['images']);
});

// Go!
gulp.task('default', ['scripts', 'sass', 'watch', 'browser-sync']);

Another useful responsive developer tool is the Chrome device simulator, which performs viewport and user-agent spoofing. However, be warned that it doesn’t accommodate the extra cruft the actual device browsers incur – address bar, tabs, status bar etc. – so the actual viewports will be significantly smaller. Real device testing is still the only way to be sure, but paid services such as BrowserStack can also help automate this.


There’s still a way to go with the redesign. I’ve only redesigned the public-facing pages, not the inner areas where the tests are done, but I’m pretty pleased to bring the site (almost) up to date, and therefore to allow the experimenters to administer these tests on more convenient devices.

With over 5,000 registered professionals and 12,000 children tested so far, the site has gradually become a valuable resource to many teachers and clinicians. I want to ensure it’s kept usable and useful into the future, for the next generation of kids who’ll need help with learning to read and write.




UK Archaeology Project 08/2015

I’ve always had a love of history and archaeology, but I’ve only ever dabbled, mostly during house renovations in the UK and Australia, and have always worried about not doing it properly. While on a trip to the UK in August 2015, I made the effort to learn a bit more about how to practise archaeology as an amateur, and found it’s not that hard to do things properly without risking permanently damaging or losing vital information or physical evidence.

My mum had spotted an advert in the local rag for an annual open “Pub Dig” by Liss Archaeology, which was fortunately a couple of days after we had arrived and got over our jet lag – and on a beautiful summer’s day. So we signed the health & safety forms and started to get trained by the friendly local group, who’d been asked to investigate a C17th pub rumoured to have once harboured a couple of sailor-murdering ruffians who were promptly hanged and left in gibbets on Box Hill for the crows.


Helen, who studies weaving in iron-age roundhouses for her PhD at the Butser Ancient Farm practical archaeology research site, showed us how to clean up the sides of a trench for recording, and how to slowly lower the floor level to a specified horizon.

Before we knew it, we were uncovering Saxon and iron-age pot sherds! I honestly do know how fortunate we were to have this amazing experience, as finding anything Saxon other than a smudge in the ground, or relics from the even more ancient prehistoric bronze/iron age, is rare even for those who dig every day… we were truly stunned.

Ever since I’ve known Gen, she’s wanted to find clay pipe. It’s become a bit of a mantra while we’ve been renovating our 1870s Sydney cottage – everything but clay pipe! Coins, letters, lace gloves, tools, marbles, bullets, newspaper, books, shoes – all from the 1850s onwards, but not a pipe in sight. When we visit The Rocks museums, we smugly point at all the various items on display and say “Got one, got two, got one, got three…” but when it comes to clay pipe, there’s just cicadas.

As we first walked up to the Liss site director, a guy came over and said “Hey, I just found this clay pipe in the spoil heap!” and Gen’s eyes popped out. They had found a fair few pieces, including two bowls which were identifiable by their dimensions and the floral icon on the side of the base. Interestingly, we “redeposited” all the broken stems back into the trench while back-filling later, along with all the other non-essential items.

They were a lovely team: Chris the pottery expert, Dave “the boss”, Graham “the sieve” and others whose names I think I also reburied. All very welcoming and patiently forgiving of a noob like myself who kept handing them stones and even leaves, excitedly asking “what’s this!”.

So, full of renewed vigour, we headed off back to my parents’ “new” house, with thoughts of what we might be able to find there. My folks move every 2 years on average (over a 50-year sample – a very reliable statistic), so seeing as I hadn’t visited for some 28 months, there was fresh turf to investigate!

Last time it was a relatively new house (1950s), but still, on the grounds there was a 1920s “midden” or dump, which contained some classic items such as glass jars, medicine bottles, enamel-ware etc.

Previously, they lived in a “church cottage” built around a Tudor chapel, where I first bought “them” a metal detector and found the remains of a cow shed where wartime relics had been deposited, including a home-guard helmet. I also found Tudor green-glaze pottery and slag.


This time it’s again a relatively modern (1930s) house, but built on older land in a historic area. So what will we find?

First off, I dusted off the metal detector, but after “doing it properly” with the team from Liss, I didn’t feel so inclined to rush in digging up stuff randomly. I did a couple of spot digs and found a plated metal star, possibly from a chandelier. I also found an iron ring, which normally I’d ignore using the detector’s iron discriminator, but it seems that a ring still triggers the detector when face-on, possibly due to strange currents or EM signals in the ring itself. Well, lucky for that – because as soon as I had turfed a foot-square section to look for the beeping item, a piece of clay pipe just popped out! Then another! Gen was beside herself.


So I thought about it, and closed up the turf without further disturbance. I want to do this one as properly as possible without a degree in archaeology, but armed with the power of the Internet and 6 weeks of working/holiday to allow me to take my time.

On a side trip to Oxford, to stay at one of the colleges with Gen’s professor associates, we stopped off near Stonehenge at a white horse carved into the hillside. As we were circling the iron-age hill fort, I said to Gen “I’ll just have a look at what the moles have brought up” – a sneaky way to find items without digging yourself. And sure enough – sticking out of the very first molehill I looked at: clay pipe. Gen just couldn’t believe it!


Looking around a few more, we found various items, including more modern ceramics, burned ore and – I think – Saxon pottery. Of course this was an extremely sensitive and ancient site, so we barely touched anything, taking some photos and pushing the items back into the molehills, safely out of sight but only millimetres from whence they naturally emerged.

I woke up the next day back at my parents’ with one word in my mind: resistivity. The guys from Liss had been using a resistivity meter for their geophysics, because magnetic survey would have been pointless in a pub garden which had been used as a building site, allotment, car park etc. for 1,000 years, and radar is beyond the budget of a local volunteer team. They had found some dark patches where chickens had fertilised the ground for donkey’s years, and the resulting foliage was holding more water. They found an exciting-looking circle, but it turned out to be a featureless sand pit. They also found a Roman-villa-looking set of stripes, which turned out to be a house-less Edwardian garden.

As I woke up I remembered “Scruffy” Evans teaching me about resistivity at school, and I suddenly decided I could build my own geophys survey device! I had the time, the patience, and the tools. The Internet had the knowledge, as unfortunately (sorry Scruffy) I had briefly archived the definition of resistivity.

In my Dad’s garage I found everything needed to build a simple resistivity meter!

  • a broom stick and dowel
  • junction terminals
  • some long brass screws
  • some very long steel screws (v2!)
  • velcro, cable ties

I made a simple H-frame by doweling the rods into the broomstick, gluing and screwing with brass screws, then set the long screws poking through downwards at 1m spacing. (It’s important to use standard spacing as there are tables, developed over years, to help interpret the results.)


The probe screws were simply connected to an Ohm-meter (a multi-meter) with some old mains wire I found. I also ground off the tips of the screw threads with a bench grinder, to make clean points.


That’s it for v1! (This is super-simple-basic – it’s not even a proper Wenner array, as it’s missing the middle voltage probes. I’ve got plans for v2, 3, 4… but I’m just really interested to see how simple you can make a meter and what sort of results you practically get.)

Then I “set out” the garden. I chose a good reference point for 0,0 which was in line with the corner of the house – something future people could replicate. I measured out 1m intervals in the x and y directions and used a string-line with 20cm and 1m markings as a guide.

First I tried a couple of runs of 12m, probing at 20cm intervals and recording the resistance on paper. After a while I found that the 3″ brass screws weren’t quite long enough to reliably penetrate the soil through the bouncy grass, so I removed them and used some 6″ steel screws. Brass would be better as it won’t corrode, but this is only a 6-week project.

Interestingly the next few runs gave significantly different results – lower readings by about 30%.

So of course I wanted to visualise the readings, and luckily, being a programmer by trade, I was able to write my own custom software to display the results. I could probably have just used Excel or Google Charts, but I do plan to take this further, so I invested a bit more time in building a mini framework that would let me, for example, add image processing such as Gaussian smoothing or the aforementioned depth charts (which I don’t understand yet!). I’ve recently been experimenting with a little Backbone+Bootstrap+Sass+Gulp setup, so it only took an hour or two to make a basic Canvas rendering application with a decent model/view architecture (rather than just hacking up some JS mudball).
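
The core of the rendering is tiny – something along these lines (a sketch of the idea, not the actual app’s code): normalise the grid of ohm readings and paint one grey square per probe point, darker where the resistance is higher.

// readings is a 2D array of ohm values, one per 20cm probe point.
function renderGrid(canvas, readings, cellSize) {
  const ctx = canvas.getContext('2d');
  const flat = readings.reduce((acc, row) => acc.concat(row), []);
  const min = Math.min(...flat);
  const max = Math.max(...flat);

  readings.forEach((row, y) => {
    row.forEach((ohms, x) => {
      const t = (ohms - min) / ((max - min) || 1);   // normalise to 0..1
      const shade = Math.round(255 * (1 - t));       // high resistance = dark
      ctx.fillStyle = `rgb(${shade},${shade},${shade})`;
      ctx.fillRect(x * cellSize, y * cellSize, cellSize, cellSize);
    });
  });
}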

I have to say, seeing the first tantalising pixels was pretty exciting…


It was drizzling outside, and so I made the excuse of coming back in after every couple of runs to “dry off” and type in the 60 figures just recorded on a wet piece of paper pegged to the device.

As time went by, I could see diagonal lines and probably some other pareidolia, so I needed to get rid of the noise.



I remembered a few bits from the computer vision course of my AI degree, so I knocked up a simple Gaussian filter. The results were pretty good!
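
It is roughly this (again a sketch of the idea rather than the original code): convolve the grid with a small 3×3 Gaussian kernel so that single-cell spikes get averaged away.

// 3x3 Gaussian kernel; the weights sum to 16.
const KERNEL = [
  [1, 2, 1],
  [2, 4, 2],
  [1, 2, 1],
];

function gaussianSmooth(grid) {
  const h = grid.length;
  const w = grid[0].length;
  const out = grid.map(row => row.slice());   // edges are left unsmoothed for simplicity
  for (let y = 1; y < h - 1; y++) {
    for (let x = 1; x < w - 1; x++) {
      let sum = 0;
      for (let ky = -1; ky <= 1; ky++) {
        for (let kx = -1; kx <= 1; kx++) {
          sum += grid[y + ky][x + kx] * KERNEL[ky + 1][kx + 1];
        }
      }
      out[y][x] = sum / 16;
    }
  }
  return out;
}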


There has been lots of speculation about mazes, gardens, diamond-shaped cattle pens or just plain sewers. I guess we won’t know until we put a test pit in across one of the more prominent lines.

I must say, I’m pretty impressed so far with the results achieved “for free” with just a multimeter and a broomstick. I think this might be a good template for a project any kid could do in their garden in a weekend, so I will try to write up the “blueprints” and open-source the software.

Tomorrow I hope to do either a test pit, more resistivity at the darker area at the top, or the various metal detector points I’ve been ignoring. Also, a friend of my folks apparently has a Roman road under their house! But hold on… there are plenty of exciting opportunities here, and I’m determined to proceed carefully and to collect and document the contextual information for future generations.

Stay tuned…


Mac’s Maker Project 2011 – A Junkpunk Chandelier

For our annual visit to Mac & Helen’s in 2012 we thought we’d try to make a chandelier. We wanted one that would fit with the style of our old house which has a certain combination of simple, basic, slightly rustic Georgian-Victorian with a beach shed!

The previous C19th owner was a stonemason and a metal worker, so there were many examples of simple chunky ironwork and tools around the house (see my home reno blog Domus Renovatio!). We’ve progressed this mixture with a basic and slightly industrial feel, with stainless steel and iron in the kitchen and utility rooms, using warehouse lights and LED-upgraded kero lamps, and added a mixture of simple chandeliers around the older part of the house. So there’s a theme but a variance – a different light in every room!

However we had one missing. We needed a chandelier for our dining room which already features Mac’s iron and wood furniture, so we were thinking along those lines. We wondered about something steampunk (without going too thematic).

When we explained the concept of steampunk to Mac he went one better. Junkpunk!

Over the years he has collected a huge array of materials from various places including a 1920’s car found in the bush near his house. It’s pretty much rusted and decayed, but a few components were salvageable, e.g. some lovely big iron cogs! As we played around with the library of rust, a design started to form in Mac’s mind…



Pip’s Maker Project – Stirling Engine #1

Yesterday I received a long-awaited package from overseas: my first Stirling engine! I became fascinated by them a few years ago, when I discovered this “long lost” invention from 1816 which, I think, when combined with modern materials and manufacturing, could help us reclaim some of the wasted energy produced by today’s myriad heat-producing devices and machines around the home, office, industry, agriculture and even in space.



I’d always wondered why we couldn’t reclaim the heat from the back of a fridge, for example. It’s energy, isn’t it? But it turns out the problem is that the potential energy is very low, because the temperature difference between the room and the heatsink is pretty small – not high enough to power most energy-producing devices. At the end of the day, even futuristic-sounding nuclear power stations are just big steam engines which use heat from uranium instead of coal – and they need a lot of heat.
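
To put a rough number on the fridge example (these are assumed, typical temperatures, not measurements): even an ideal heat engine is limited by the Carnot efficiency, so a condenser coil at about 35°C (308 K) dumping into a 20°C (293 K) room gives a theoretical ceiling of only about 5%.

\eta_{\max} = 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}} = 1 - \frac{293\ \mathrm{K}}{308\ \mathrm{K}} \approx 0.05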

This is where Stirling Engines come in. They run from a very small differential of heat between two plates (or a heat-source and a heat-sink). Of course a very low differential means there is only a small amount of energy – but there are hundreds of millions of fridges. It definitely all adds up.

Living in Sydney in an old sandstone cottage with a tin roof, I am also (literally) painfully aware of how extremely hot our attic gets, while the cellar is always cool. Surely this differential would be enough to power the air-con, or at least a fan to get the air moving passively. Air conditioning is one of the biggest “wasters” of energy and should be the target of a radical rethink. I firmly believe that rethink should involve more holistic passive-house design, and incorporating Stirling machines into the mix would add a unique hi/low-tech advantage – with none of the usual disadvantages of human invention.

Stirling machines are very simple, use no fuel (except heat), need no lubricant, emit no pollution, make no noise, run for “ever” unattended, look beautiful and cool and inspiring… it’s pretty unbelievable isn’t it?

So what’s the catch? Well, there’s no catch really, except perhaps that it’s hard to make a really powerful one – but that’s not their niche. I’ve just bought a book on how to build a 5 HP model, which is not to be sniffed at.

So here’s a video of me putting together my first kit:

Watch it on YouTube

Next – I’m thinking of building a ventilation pipe between my cellar and attic and using a Stirling engine – powered by the 20–30° heat difference – to help power fans to cool the upper house and dry the sub-floor.

I’d also like to see how much power I can get from the back of my fridge and investigate the Seebeck and Peltier effects which are used in thermoelectrics to similarly generate power from heat differentials and shunt heat around.

But for the time being, I’m going to put my lovely brass-flywheeled engine on something wastefully warm, like my laptop charger at work, and use it to inspire and pass on the infectious Stirling fascination.


The UI is the system – most people think

A common developer mistake is to assume clients, stakeholders, mums actually understand the multi-layered, n-tier, decomposed modular architectures they build.

I read a piece of wisdom once that has just resonated with me as I came to present a demo of a system I’ve proudly built – but the demo UI isn’t quite finished. I was planning in my head how to explain that it does more than you can see (there are some obvious gaps), then remembered how stupid that sounds to normal people.

Developer:  Ok so here’s this amazing engine X we’ve built for you.
Client:  Great, so does it do Y?
Developer:  Oh sure, but there’s no UI for that just yet.
Client:  So… it doesn’t then.
Developer:  Oh yes it does, the core is extremely feature rich and the Z-layer, n-tier component bindings allow…
Client:  😐

An inaccessible feature, for all intents and purposes, does not exist!

The correct answer is to impress upon them how easy it will be to “finish that off”.

If in doubt, I use the building construction analogy: “All the foundations and walls are built, and we’ve finished the plumbing. We just need to fit-out and decorate before opening to the public.” From this a non-technical client can understand there’s still a bit of important work to do (possibly even functionality-related), but we’re over the hump and it’s more straightforward work from here, and will even start to look nice soon.

The more I use the building analogy for development work, the deeper I find it matches: the importance of architecture and foundations, the sequence of events, the roles and responsibilities – all have analogues. In fact rebuilding my house gave me some real insights into running a software team, and even into how a mature industry such as construction could provide guidance to one as young as software engineering – universal standards, for example!

While this kind of cross-pollination might be far-fetched, you’d have to admit a building without any doors or windows is just as useless to a person as a software feature with no interface.


Mac’s Maker Project 2009 – Copper Necklace Tree

Over new year 2009 up at Helen & Mac’s workshop near Perth, Western Australia we decided to make a present for Gen – to organise all her millions of necklaces which keep getting tangled up in boxes and drawers.

I’d like to present our methods and progress as a simple photo-story without me rabbiting on and making dad jokes.

See the full album of photos on Flickr…



Mac’s Maker Project 2008 – Wooden Tessellated Coasters

On my first trip up to Helen & Mac’s place near Perth, Western Australia, they showed me the workshop and my eyes popped out of their sockets. Not only a huge, well-stocked woodworking space but a forge too, and all set in a beautiful rural scenic location!

So Gen said, sure dad will be happy for you to make something in the workshop over the break…

My initial idea



Originally I wanted to make coasters that fitted together in the style of MC Escher’s intricate bird-fish tessellated designs, but being a bit of a noob, I hugely underestimated the challenge.

Fearing for Mac’s heart and not having a spare month, we together planned a design which was pleasing, tessellated and could be manufactured on the machines available (e.g. no jigsaws).

Serendipitously, while planning, I coloured in alternating tiles to get a better idea of the outline and realised a two-colour set would look great!


So we got to work with some Jarrah and She-oak offcuts, using a bench circular saw, a thicknesser/bench plane, a beautiful old bandsaw, several sanding machines and a 7m belt sander.


> See the full sequence with descriptions on my Flickr album “Tessellated Coasters”…

TL;DR: these are the finished coasters, waxed and drying.


Here are some of the interesting patterns and visual illusions we discovered you could make, over a gin of course!


> See the full story with descriptions on my Flickr album “Tessellated Coasters”…


