Introducing lpass-add and lpass-env: ergonomically get secrets out of LastPass (2016-12-29)

Today, I published two scripts that wrap the LastPass CLI lpass. These scripts are very handy if you are storing non-password credentials, like environment variables or SSH keys, directly in your LastPass vault.

lpass-env - Enables you to easily read environment variables straight out of LastPass Notes fields and into your shell session. The idea is that instead of putting your variables in your .bash_profile, you add them to LastPass and use lpass-env to load them into sessions only if they are required. This way, the credentials are not stored on disk or exposed to other scripts running in your terminal windows.

lpass-add - A wrapper around ssh-add that reads private keys from LastPass instead of from a file on disk. This is intended for private keys that do not have a password, since it allows you to store the whole private key in LastPass directly. If you want to store your private keys on-disk and have passwords in LastPass, use lastpass-ssh instead.

Both utilities should be well-documented in their repositories (I just spent an hour writing READMEs that are far bigger than the scripts themselves, which is why this post is so short -- check the repositories for more details).

Why LastPass?

Surely there are other, better ways to protect this information than Bash scripts and LastPass notes? Well -- I thought so too at first, but so far, nothing's come up. LastPass does have an "SSH Key" note type, but it is inferior to just using generic Notes, because the SSH Key fields aren't multiline and therefore can't hold actual SSH keys.

In operations, these secrets are ideally stored in special secret management solutions like Vault. These (presumably) work great once they're set up, but they add significant overhead for some things that LastPass makes easy (like authenticating on different PCs and sharing passwords with authorized persons). In the end, setting up a separate "personal secret management" solution seems like overkill. If you are already using and trusting LastPass, I figure you might as well keep using it for as much as possible.

As for 1Password, Dashlane, and other LastPass competitors -- they might be good options, too. If you aren't using any password manager at all yet, I recommend you think carefully about what you want from yours and research what's available. Picking a password manager is like getting married without a prenup -- you can break it off with them later, but it won't be fun or pretty. So choose wisely.

Finally, maybe you're wondering why you shouldn't just stick all these things in plain-text files on your disk like everyone else. Why use an encrypted secret-management solution at all, if your filesystem is already encrypted on-disk? Actually, I don't know the answer to this one. All I can say is that storing password-equivalent secrets in loose files on my system doesn't feel right to me. Even if they don't get compromised, they can easily be lost or destroyed by mistake. Storing them in LastPass means they are (theoretically) safe in the cloud, away from accidental corruption or loss.

Unidirectional data flow architectures in JS (2015-04-22)

A new breed of Javascript framework is emerging that emphasizes unidirectional data flow and reactive programming. These frameworks/architectures, like Flux, its derivatives, and my favorite, re-frame, are billed as ways to escape the so-called "callback hell" of the event-driven async programming model by simplifying and explicitly describing how state changes propagate through the application. Of course, the ideas these architectures use aren't exactly new -- they've been used in GUI development for a long time. But they are starting to be rediscovered in the single-page application design sphere, which is still a fledgling field (relatively speaking).

I have fallen in love with unidirectional data flow (UDF) as an SPA design pattern. But at the same time, it's clear that these architectures are simple enough that using a third-party framework is usually overkill. Even Flux does not come with a framework, only example code (although it has been implemented as a framework by many other people). To construct our own UDF applications without a framework, we need to figure out what characterizes a UDF architecture. Toward that end, I have been thinking about the following attributes of a unidirectional data flow architecture.

These are not meant to be prescriptive or comprehensive. They are just my attempt at describing the current state of UDF architectures in SPA development, based on my own observations. But I do think that each attribute has benefits that it brings to the table, and all four of them work together to make an application simpler and more stable. Combined, they lay a flexible and powerful groundwork that your application can build on.

  1. Centralized data store contains all state
  2. Events are managed by a dispatcher
  3. Handlers update state (or raise events that do)
  4. The UI is a function of the application state

Centralized data store contains all state

A major paradigm shift in these architectures is using a central store for all application state. In Flux, the group of Store components together contain all the application's state. In re-frame there is an app-db hash table that contains everything. The important thing is that the application state is centralized. If all application state is collected into a single "in-memory database", this means the rest of the application can be stateless.
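
To make this concrete, here is a minimal sketch of what a centralized store can look like in plain Javascript (illustrative only; this is not Flux's actual Store API, and the state shape is made up):

// The single "in-memory database" holding all application state.
var appState = { todos: [], selectedTodo: null };
var listeners = [];

function getState() { return appState; }

// Every state update flows through here, so updates can be
// observed, logged, or analyzed in one place.
function setState(newState) {
    appState = newState;
    listeners.forEach(function (listener) { listener(appState); });
}

function subscribe(listener) { listeners.push(listener); }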

Compare this with an object-oriented event-based architecture, where components manage their individual state and hide it from others. This means that each component becomes responsible for watching its own internal state and triggering state change updates if it's modified. Also, since each component is stateful, we have to be very careful about how we handle updating or recreating component objects, and we can't reason about state updates in general because there is no single thing that corresponds to a state update which can be observed or logged or analyzed.

If all application state is in one place, we have a lot of power. We can easily implement undo and redo by keeping a history of state updates, which is much harder if components have internal private state. We can easily know when to redraw the UI by watching for state updates. We can easily implement saving/loading of the state, which, for example, lets us resume user sessions at exactly the same spot when they revisit the page. We can also avoid situations where shared state is stored in two components, unnecessarily wasting space and causing potential synchronization issues, or where one component wants to read the "private" state of another component. There's no privacy among friends, and we're all friends here, right?
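
For example, undo falls out almost for free once every update flows through a single function. Here is a sketch extending the hypothetical setState above:

var history = [];

function setState(newState) {
    history.push(appState); // remember the previous state
    appState = newState;
    listeners.forEach(function (listener) { listener(appState); });
}

function undo() {
    if (history.length === 0) return;
    appState = history.pop(); // restore without re-recording history
    listeners.forEach(function (listener) { listener(appState); });
}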

It's important to note that we are not just talking about long-term state. Transient, interface-related state (like which todos are selected for editing, or which prompts or error messages should be visible) is also stored in the same global structure. This is exactly the type of state that tends to be privately held by the component diaspora in an object-oriented architecture.

Events are managed by a dispatcher

Events -- like a user interaction or an asynchronous process completion -- are not allowed to propagate willy-nilly. This prevents some types of issues: you should never accidentally capture state in callback closures, callbacks are never nested, and business logic is centralized. Although we can't get away from having to attach event listeners or promise callbacks because we are stuck with the Javascript browser API, we can make them as simple as possible. Event listeners or callbacks do not include any logic and they do not accidentally enclose any state. All they do is immediately hand off their event to a centralized event handling switchboard, which Flux and re-frame both call a dispatcher. When the user hits Ctrl+S, the listener doesn't validate the current state, or make an AJAX call, or attach another callback handler to update the UI when the save is complete. All it does is inform the dispatcher, "Hey: I'm raising a 'save' event". Events passed to the dispatcher usually have an event type and possibly an event value which may be captured by the closure.

// all event listeners look like these
function (e) { dispatchEvent({ type: "save" }); }
function (e) { dispatchEvent({ type: "selection", value: selectedThing }); }

The dispatcher is responsible for deciding what to do with incoming events, but usually it farms out this decision by dispatching the event to registered handlers. In Flux, the dispatcher informs the Stores about the event, expecting them to handle it by updating their state. In re-frame, the dispatcher calls a handler function instead, which is able to update the application state directly. Of course, the dispatcher is completely stateless (in terms of application state -- it may have a dispatch table or something similar, but we do not expect it to be observable).
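
A dispatcher in this style can be very small. Again, this is just a sketch, not Flux's actual dispatcher API:

var handlers = {}; // dispatch table: event type -> handler function

function registerHandler(type, handler) {
    handlers[type] = handler;
}

function dispatchEvent(event) {
    // One central place to add logging, validation, or throttling.
    console.log('event raised:', event.type);
    var handler = handlers[event.type];
    if (handler) { handler(event); }
}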

There are surprising benefits to using a layer of indirection (the dispatcher) over directly attaching the handlers as event listeners. The handler is not directly defined or called within the context of the event listener attachment, so the only information it receives is the event object itself. This also means that callbacks cannot be nested since each callback will just raise an event whose handler is defined elsewhere, which keeps code "flat". The dispatcher itself gives us an opportunity to add logging or state tracking or throttling or validation over all events. Finally, handlers are completely separate from the DOM, so events cannot (easily) sidestep the architecture by directly manipulating the UI.

Handlers update state (or raise events that do)

Once the dispatcher hands off an event to a handler, there are only two things that a handler can do. It can raise another event (or begin an asynchronous action that raises an event when it completes), or it can update the application state. Of course it could also do other things along the way; handlers can and should contain most of your application's business logic. But in the end, if they don't update the application state or raise another event that eventually will result in a state update, the handler might as well be a no-op.

If the handler doesn't seem like it updates the state or raises an event, but it still affects the application, be careful: you may have stumbled on some hidden state that should be dealt with. If possible, exhume the state and inter it in the centralized store.

On a related note, handlers should be the only things that update your application state. When you think about it, there is no other place for it to really happen, as long as the views don't trigger state updates, and you don't use two-way data binding or event listeners that are not handled by your dispatcher. The fact that the handlers are the only things that update your state is what makes the application's data flow "unidirectional". Data flows from the state, to the views, to the dispatcher, to the handler, and back to the state. If you lose this property, it becomes much harder to think about your application. Repeat after me: only handlers should modify application state.
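
In terms of the sketches above, a "save" flow might look like this (saveToServer is a hypothetical AJAX helper and merge a hypothetical copy-and-update helper; neither is part of any framework):

registerHandler('save', function (event) {
    var state = getState();
    setState(merge(state, { saving: true })); // handlers may update state...
    saveToServer(state.todos, function () {
        // ...and callbacks contain no logic; they only raise another event.
        dispatchEvent({ type: 'save-complete' });
    });
});

registerHandler('save-complete', function () {
    setState(merge(getState(), { saving: false }));
});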

The UI is a function of the application state

The application UI should depend on the centralized application state -- for example, in Flux the views can depend on one or more Stores. The UI cannot depend on anything else. This rule makes it simple to know when you should re-render the view: do it every time the centralized state changes (because of a handler, hopefully). Of course, we don't always want to actually re-render the whole UI whenever the state changes. Since browser reflows are expensive, we don't want to trigger them by updating DOM elements that don't need to be changed. Usually, only a tiny part of the UI actually needs to change when the state is updated.
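
With the store sketch from earlier, the naive wiring is a one-liner: subscribe a render function, and it runs on every state update. The rest of this section is about doing better than redrawing everything:

// render is (conceptually) a pure function of the application state.
// (Assumes an element with id="status" exists in the page -- illustrative only.)
function render(state) {
    document.getElementById('status').textContent =
        state.saving ? 'Saving...' : state.todos.length + ' todos';
}

subscribe(render); // re-render whenever a handler updates the state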

There are different ways to handle this issue. A reactive programming strategy would be to model the central state as a group of streams (a.k.a. observables), so each component can listen to only the streams it cares about, and redraw itself when a new value comes from upstream. This is conceptually similar to the approach taken by Facebook's Relay and GraphQL library combo.
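
A tiny version of that idea, building on the hypothetical subscribe from earlier (real FRP libraries like RxJS offer far more than this sketch):

// Subscribe to a single key of the state, firing only when it changes.
function watch(key, callback) {
    var last;
    subscribe(function (state) {
        if (state[key] !== last) {
            last = state[key];
            callback(last);
        }
    });
}

watch('todos', function (todos) { /* redraw only the todo list */ });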

Another powerful solution in vogue is the use of a virtual DOM. Instead of directly modifying the DOM, your views return a "virtual DOM": a data structure that represents the DOM but can be updated without triggering UI reflows. The virtual DOM your view returns is "diffed" with the previous virtual DOM to determine what has changed. Then those changes are "patched" to the actual DOM in a way that minimizes the reflow cost -- for example, only changing the text inside a single cell of a table instead of removing and re-creating the entire table. The virtual DOM serves as an intermediary layer which allows the application to freely trigger a re-render of the entire application on each state change, safe in the knowledge that the actual expensive DOM manipulation is minimized even if you are re-creating the virtual DOM tree each time.
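
Using the virtual-dom library mentioned in the next section, the core loop looks roughly like this (a sketch reusing the hypothetical getState and subscribe from earlier):

var h = require('virtual-dom/h');
var diff = require('virtual-dom/diff');
var patch = require('virtual-dom/patch');
var createElement = require('virtual-dom/create-element');

// The view: a pure function from state to virtual DOM tree.
function render(state) {
    return h('div', [h('p', state.todos.length + ' todos')]);
}

var tree = render(getState());
var rootNode = createElement(tree); // initial (real) DOM render
document.body.appendChild(rootNode);

subscribe(function (state) {
    var newTree = render(state);
    rootNode = patch(rootNode, diff(tree, newTree)); // minimal DOM changes
    tree = newTree;
});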

Libraries

UI rendering is probably the only place where using a library is almost a necessity -- if you don't use one, you will end up writing one yourself. Flux recommends React. Re-frame uses Reagent, which is a ClojureScript wrapper for React. Relay and GraphQL are built to be used with React. But React is not the only option. Other virtual DOM providers include the unassuming virtual-dom and the lightning-fast mithril. Even Ember is working on a virtual DOM rendering engine called Glimmer in order to take advantage of the dramatic performance improvements that a virtual DOM can provide. The library du jour changes over time, but it's not that important to pick the latest and hippest. It's more important that the view layer supports the UDF paradigm, which means using one-way data bindings only, and that it performs well enough that the user experience isn't disturbed, whether through a virtual DOM or some other performance-enhancing technique.

Next episode: the ultralightweight antiframework

Things like Flux and re-frame are designed for large applications. But I am interested in getting all the benefits of UDF even for relatively simple applications, where a Flux-style architecture, whether from a third-party framework or not, would be plain overkill. My next post in this series will discuss an ultra-lightweight application design pattern that is suitable for getting the benefits of UDF even for very simple applications. We will arrive at the design pattern by looking very skeptically at every component in a Flux-style architecture. What is unnecessary for a not-Facebook-scale SPA? Which components can be simplified, and how? In the end we will find that, with a little forethought, a UDF architecture can be baked into your SPA with hardly any effort and with practically no "framework-style" glue code at all.

Javascript task running: Gulp and Browserify recipe (with optional transforms and uglify) (2014-12-03)

Browserify is a Javascript tool that can "bundle" a modularized application into a single app.js file for the browser. It lets developers use Node.js-style modules, declaring public APIs with module.exports and including other modules with require(). Browserify then wires the modules together and bundles everything into a single file for easy Web browser inclusion. No more having to think about <script> inclusion order.
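
For example (the file names here are made up for illustration):

// greet.js -- declare a public API with module.exports
module.exports = function (name) {
    return 'Hello, ' + name + '!';
};

// main.js -- pull in other modules with require()
var greet = require('./greet');
console.log(greet('world'));

Running browserify main.js -o app.js then bundles both files (and anything else they require) into a single app.js.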

Gulp is a task runner, like Make or Rake. The main advantage of Gulp is that Gulpfiles are written in Javascript, which means you can leverage server-side Javascript APIs, like npm modules or the Node.js standard library. Also, it can do file watching. But that's for another post.

Anyway, since Browserify has a Javascript API, we can easily incorporate it into our Gulpfile. There is a gulp-browserify plugin and a gulpify plugin that are both designed to make this as easy as possible, but they're both deprecated. Basically, the browserify API returns a source stream to begin with, so it's already almost compatible with Gulp's streaming build model. The only thing we need to do is convert the browserify stream into a Vinyl file stream using the vinyl-source-stream microplugin.

For now at least, this is what a Gulp browserify task "should" look like:
var gulp = require('gulp'); 
var browserify = require('browserify');
var source = require('vinyl-source-stream'); 
var streamify = require('gulp-streamify'); // optional
var uglify = require('gulp-uglify'); // optional 

gulp.task('browserify', function() {
    // Return the stream so Gulp knows when the task has finished.
    return browserify('./src/client/app/main.js')
        .transform('coffeeify') // optional -- apply Browserify transforms
        .bundle()
        .pipe(source('app.js')) // wrap the bundle in a Vinyl file named app.js
        .pipe(streamify(uglify())) // optional -- minify the output for production
        .pipe(gulp.dest('build/client/'));
});

The value of a sentence (2014-09-09)

Succinctness is an underestimated virtue in nonfiction: too many books use a page where a sentence would do. Self-help and pseudo-motivational books are especially remiss; entire books in those genres can be reduced to almost nothing. That's not to say such a book has no value; rather, I mean that a sentence can be even more valuable than a book.

But what could I mean by the "value" of a book? In short, a nonfiction book exists to transmit knowing from writer to reader. This is different from merely presenting facts: as any math student knows, a book can be dense with facts yet leave the reader vacant of knowing. At the opposite extreme we have self-help books, which are nearly devoid of facts but attempt to transmit a specific knowing to the reader. They threaten, describe, exemplify, and cajole, in an attempt to pass on their knowing. And yet they are typically not very good at it. How many people read a powerful self-help book, feel an epiphany, and believe they have been taught, only to wake up in the morning indifferent again, as if from a dream?

A week after this experience, the reader perhaps remembers the feeling of epiphany with some quantity of embarrassment. Even after such a moving self-help book, why have they fallen again? What can they even remember from the 200 pages consumed so readily? Probably nothing more than a single vague concept. This is the seed of the unifying truth; the kernel of knowing. The entire remainder of the text, so readily devoured, was just a mechanism designed to effectively transmit that knowing. And yet, looking back, it didn't seem to work.

The problem is that the self-help mechanism is rarely effective: often self-help books contain a powerful idea that, if known, could change an outlook completely. But this kind of idea meets a lot of resistance in the mind of the reader. This is completely unsurprising: paradigm shifts are never easily perpetrated. Epiphanies are an illusion. They are excitement mistaken for realization. Instead such a powerful idea takes practice and careful thought to take root and become knowing. Gradual change is the only effective method for self-transformation. For this reason, the book rapidly consumed delivers nothing that you can't get from a single sentence, delivered from authority. It simply plants the seed of change. No matter how much motivational material is packed in, a serious transmission of knowing cannot ensue immediately.

There is another factor in play, hinted at above, which is central to the issue. A successful transmission of knowing is the responsibility of the reader as well as the writer. The reader must accept the knowing. However, this process is very slow. Through patient openness, a reader can slowly internalize the truth they have been told. Our mind's digestion is far slower than our stomach's. 

Now the problem is that self-help books and similar texts are not designed for this calm, meditative digestion. Instead they whip the reader into a cognitive frenzy. They try to shock the reader into an ecstatic realization. They promote epiphany. This is not a productive strategy! Instead, consider the humble yet lovingly constructed sentence. It is dense with meaning. It grabs at our imagination. It is concise enough that we can memorize it and refer to it often from within our own mind. How often does a reader reopen a self-help book? Maybe a dozen times at most, and yet the sentence can be turned over a dozen times in an hour.

And so a sentence -- a single cutting thought -- can be more valuable than a book. It is a form that is well-suited to transmission of knowing. It is memorable and so it reminds the reader of the truth, again and again. Let a flood rush down a mountain, and it overruns the land and quickly dissipates, but a small mountain stream slowly wears a new pattern into the rock.

Of course, a sentence can't be published, and many misguided readers in fact seek that epiphanic sensation, like an addict seeking a high. So the self-help book is perpetuated. But I urge you to think carefully about such books as you read them. Don't discard the book entirely, but seek the true sentence behind the mountain of platitude and anecdote. Find that sentence -- that true idea -- and reflect on it again and again. Patiently let it percolate. Distrust epiphany.

Project retrospective: PersonalPVT (2014-09-08)

PersonalPVT (site, code) is a Web application for doing psychomotor vigilance testing (PVT). The actual test is extremely simple: numbers pop up on the screen and you hit the spacebar (or mouse button) in response. The speed of your responses can indicate your level of sleep deprivation. Implementing a PVT tool is easy; providing meaningful views into the resulting data is harder. I spent about 1 month on this project, working mostly on weekends.

For those interested in an alternative PVT application more oriented towards researchers performing studies, there's PC-PVT. I have used PC-PVT and it's fine, but it's not designed for individuals to get started with easily -- for one thing, you need to install the MATLAB runtime to use it.

From a technical perspective, PersonalPVT is interesting to me because it's a 100% client-side application; the server only serves static content. This is convenient because static files can easily be hosted for free using Github Pages. It's also really cool, because it means the application has more fundamental similarities to something like a Java app(let) than to a traditional Web site.

I guess some people might disagree, but client-side development used to be a nightmare for Web developers. A 100% client-side application would have been a perverse sort of punishment, not a project requirement. But with the advent of client-side Javascript MVC frameworks, a 100% client-side application is not just conceivable but arguably easier to build than server-side MVC.

There are a lot of great client-side Web frameworks, but I have been really interested in trying out AngularJS, so I built PersonalPVT with it. I know some people have trouble with the terms or concepts in Angular, but they seem to click with me.

The project wasn't big enough that I could really feel the benefits of Angular's vaunted testability or run into the view rendering/updating slowness that I have heard about, so I can't comment about using AngularJS for very large projects. For a webapp this size, though, I think it's the best framework available today, especially if you use ui-router to handle the routing instead of $route.

Authentication is still a problem for client-side applications, because it basically requires a shared (i.e. server-side) database to store usernames, (salted and hashed!) passwords, and session tokens or whatever. Typically, Javascript applications will use HTTP requests to communicate with an auth DB. I got around this by using the browser's local storage to save settings and historical PVT data, which removes the need for any authentication. This approach matches really well with the idea of 100% client-side program execution, but it also means you can't share your data across browsers, which can suck.
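
The local storage part is only a few lines. This is a sketch; the key name and data shape are made up, not PersonalPVT's actual format:

// Persist results in the browser itself -- no server and no login,
// but also no way to share data across browsers or machines.
function saveResults(results) {
    localStorage.setItem('pvt-results', JSON.stringify(results));
}

function loadResults() {
    return JSON.parse(localStorage.getItem('pvt-results') || '[]');
}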

Finally, the charts in the application use Chart.js, but if I were to do it again, I would probably use NVD3. Although Chart.js works well at what it does, it lacks useful features like allowing different datasets to have different vertical scales, or allowing you to turn off horizontal scale labels but not vertical scale labels.

Maneuver warfare was agile before agile (2014-02-14)

I have spent some time working on an internal application to support Agile developers (sort of like Microsoft's TFS, but without the enormous price tag, and closely integrated into our documentation and management systems). Because of this, I have done quite a bit of thinking about Agile and Agile development practices.

There appears to be a consensus that Agile is good. In fact, the opposing paradigm, "waterfall" development, is widely described as an accident, or an anti-process. Winston Royce, Waterfall's reluctant creator, presented the model and in the same breath stated, "I believe in this concept, but the implementation described above is risky and invites failure." [paper]

However, the two models are arguably equally effective as long as the development environment is completely static: if new information never comes to light, specifications and requirements never change, and all details of implementation are known from the get-go, then the waterfall method and the agile method should have identical outcomes by the time the software is done. (Agile development should be producing useful stuff throughout the entire period as well, which some Agile evangelists will say can generate revenue. But once the development cycle ends, Waterfall "catches up" by producing the entire finished product. In theory.)

Of course, no real-world development occurs under such ideal circumstances. Change is inevitable — changing knowledge, changing requirements, and even changing personnel — and that’s where teams get into hot water. The waterfall method has no mechanism to respond to change.

Agile and non-agile development both follow the same general process:

  1. Collect information from users, managers, and other stakeholders about their desires, expectations, or specifications for the application.
  2. Synthesize these into a software design. What are you building? What is it expected to do, and what features are optional? How should the functionality be implemented?
  3. Decide which things you are actually going to commit to, and distribute responsibility for these things among the team.
  4. Actually do the development, resulting in a working, QA-tested product.

The difference is that Agile teams navigate these four steps in a matter of 2-4 weeks, while waterfall teams can take months or years to complete the process. This matters because step 1 is the only time you can respond to changes in the environment, which means you risk wasting time in steps 2-4 if your information is out of date. This risk grows as the loop gets longer, such that waterfall teams almost always encounter situations where they have to break out of the cycle in response to changed requirements -- or worse, they simply plod on, only to find, 8 months later, that customer requirements have changed, that the wrong features were implemented, or that there is simply no market for the product they created!

Software development isn’t the first profession to realize that only adaptive, iterative processes, which rapidly adjust to feedback, are suited to environments in flux. The Agile Manifesto would make perfect sense (software references aside) to Sun Tzu, who proposed similar ideas two and a half millennia past. More recently but in the same field — military strategy — an Air Force fighter pilot, John Boyd, introduced the same concepts to the U.S. military at the cost of his career. Probably his most famous idea was the "Observe Orient Decide Act loop", or OODA loop, which he presented in his brief called Patterns of Conflict.

The concept of the OODA loop is extremely simple. Wikipedia puts it succinctly in terms of physical action [2]:

  • Observation: the collection of data by means of the senses
  • Orientation: the analysis and synthesis of data to form one’s current mental perspective
  • Decision: the determination of a course of action based on one’s current mental perspective
  • Action: the physical playing-out of decisions

OODA loops aren’t limited to individual action; Boyd believed the concept existed in organizations as well: a single corporation can encapsulate many different, nested loops. Whether the loop is personal or organizational, Boyd’s suggestion is to "get inside" the enemy's (or competitor's) OODA loop. This basically means that if you change the circumstances on the enemy faster than they can observe and react, you can outmaneuver them. If you can get inside their loop, the theory goes, victory will follow. In the military, these principles are generally categorized as Maneuver Warfare.

Does this sound familiar? Agile is maneuver warfare, applied to software development. At least — some of it is. Obviously there’s a little more to the Manifesto than just a speedy development cycle, and a little more to Agile than the Manifesto. Nevertheless, the two are closely intertwined; development in short iterations is probably the central attribute of an Agile development process. So, what can Agilists learn from the OODA loop?

First, it’s absolutely critical to be responsive to feedback at all times but especially after each development iteration. Agile teams need to make sure not to skip the "observe" step — getting customer feedback, examining outcomes, and team introspection are all part of the observations that should be made during the first phase of the cycle. If the cycle is not executed completely, including re-observing and re-orienting after each iteration, the biggest benefit of a short development cycle — responsiveness — is quickly lost.

Second, the OODA loop gives Agilists a way to explain the benefit of having agile teams within a waterfall company, where the Agile "value proposition" is made less clear by company policy limiting incremental product deployment. Even if the company has large OODA loops that progress at a glacial pace — like multi-year release cycles — smaller sections of the organization can still benefit from implementing Agile. If individual development teams can be freed from the release cycle (or if they directly and regularly interact with customers themselves), implementing Agile will allow them to quickly respond to new customer interests, industry trends, development by other teams, or any of the other changes that keep the development environment in flux. Instead of being limited to a long release cycle, the team can quickly turn around with results, potentially improving customer satisfaction or outmaneuvering a competitor. Even if teams can't be freed from the organizational release cycle, using Agile principles can still help them adapt quickly to changing internal requirements or other new information.

Indeed, this was Boyd’s key recommendation to the military: that power be pushed to the fringe of an organization, where the OODA loops are smaller. Leaders, he suggests, should train their subordinates extensively and then give them as much decision-making power as possible. A well-trained man in the field can recognize and adapt to changing circumstances much faster than the officers in High Command. Therefore, an organization which trains its operatives to quickly make decisions in response to their observations in the field will be able to outmaneuver an organization which centralizes tactical command.

The Agile correlate: give teams the training, freedom, and exposure they need to quickly respond to changing circumstances or new developments in the field without tying them down in corporate process.

Hack language parser in a single regex (2013-12-30)

Working through The Elements of Computing Systems, I had an opportunity to create a simple assembler for an assembly language called Hack. It is an extremely straightforward language (even for assembly), so a regex can parse it exactly to spec, unlike languages with more complicated grammars. Not only that, but by taking advantage of some extensions — verbose mode and named captures — I can make a parsing regex that isn’t completely opaque, thereby completing the project and having some fun in the process.

For more information about the Hack assembly language or The Elements of Computing Systems, you can visit their website.

But, before we see the pretty version, here's the parsing regex stripped of all whitespace and comments, since half the fun of regexes is their incomprehensible terseness:

^\s*(?P<instruction>(?P<L>\((?P<Lsymbol>[A-Za-z_.$:][A-Za-z0-9_.$:]*)\))|(?P<A>@(?:(?P<Ainstruction>\d+)|(?P<Asymbol>[A-Za-z_.$:][A-Za-z0-9_.$:]*)))|(?P<C>(?:(?P<Cdest>A?M?D?)=)?(?P<Ccomp>0|1|-1|[-!]?[DAM]|[DAM][-+]1|[DAM][-+&|][DAM])(?:;(?P<Cjump>J(G[TE]|L[TE]|EQ|NE|MP)))?))?\s*(?://.*)?$

And here is the version with full whitespace and comments. This is the longest regex I've ever written (by far).

^
\s* # Whitespace gobbler

# This capture group is non-empty if the line contains an instruction
(?P<instruction>

        # Matching labels of form:
        # (Xxx)
        # This group is non-empty if a label was matched.
        (?P<L>
                \(
                        # Lsymbol group contains label name.
                        # Matches words to Hack spec.
                        (?P<Lsymbol> [A-Za-z_.$:][A-Za-z0-9_.$:]* )
                \)
        )

        # Matching A instructions of form:
        # @123 or @Xxx
        # This group is non-empty if an A instruction was matched.
        |(?P<A>
                @(?:
                        # Ainstruction group contains instruction target (numeric)
                        # of form: @123
                        (?P<Ainstruction> \d+ )
                        |
                        # Asymbol group contains label symbol
                        # of form: @Xxx
                        (?P<Asymbol> [A-Za-z_.$:][A-Za-z0-9_.$:]* )
                )
        )

        # Matching C instructions of form dest=comp;jump
        # A=D+M or AM=1 or A=!D;JEQ etc.
        # This group is non-empty if a C instruction was matched
        |(?P<C>

                (?:
                        # Contains C destination
                        (?P<Cdest> A?M?D? )
                        =
                )?

                # Contains C computation
                (?P<Ccomp>
                        0
                        | 1
                        | -1
                        | [-!]?[DAM]
                        | [DAM][-+]1
                        | [DAM][-+&|][DAM]
                )

                (?:
                        ;
                        # Contains C jump conditional
                        (?P<Cjump> J(G[TE]|L[TE]|EQ|NE|MP) )
                )?
        )
)? # Note: each line can have 0 or 1 instructions.
\s* # Whitespace gobbler
(?://.*)? # Comment matching.
$

Skewomorphism: Second-order mimesis (2013-09-03)

I asserted for many years that it’s not hard to keep a mental barrier between “reality” and “fantasy.” I'm here today to tell you I was wrong.

I don’t mean to renege on my belief that people who play Grand Theft Auto don’t become inclined to commit grand theft auto, but rather to introduce some depth to the dialogue. 

Everyday decision-making, especially in casual conversation, is primarily driven by mimetic tools and experiential data, broadly assessed in an instant by heuristic thought processes which produce a result that is rarely ideal and only occasionally appropriate. In conversation you don’t have time to think, so you resort to intuition. Post-hoc deliberation often produces a better result: we can usually come up with a good response to almost any situation, sometimes even as early as a few minutes after the scenario flits through our experience. Of course, this capacity is generally worthless, except for embellishing stories and posting on Reddit.

So, suppose you (like me) spend your leisure time engaged in the fantastic: movies, anime, video games, novels, and reading on the Internet. Well, what kind of corpora are you collating in your network of neural circuitry? What cohesive but uncorroborated data will be available to the intuitive heuristic that fires to collect a response to an idle comment? Only that from your experience: the fantastic. Your understanding of a situation is subtly influenced by the fiction.

This is the phenomenon of skewomorphism: when an internal model of reality includes elements derived from the non-real.

If you've ever met a person who "talks like a book," you've run into a skewomorph. If being a reader has such a significant effect on someone's speech, how much greater is the effect from movies and TV, which are primarily dialog? While the outcome might be more socially acceptable in that case, societal rubberstamping doesn't hide the fact that the effect almost certainly reduces your ability to act appropriately by basing your expectations on fantasy instead of reality.

Unfortunately, it's not like I have a solution or anything. In fact, there doesn’t seem to be a handy solution, any more than there is to the problem of physical acuity facing an adult who as a child never fought anybody or fostered athleticism and rarely even challenged himself on the playground. Some things are easily, naturally accomplished by many children, yet challenge adults because we understand expectations and inhibitions. The prime time for gathering data about reality is past.

I guess the best bet is to flip fiction the bird and get some experiences of your own, but that’s obviously easier said than done. As an adult I’m supposed to behave, and I have to live with the people around me, so I don’t want to just go off the fucking wall as a form of data aggregation, although that is probably a great way to work against the problem presented.

Of course, there’s another side to the story: if one spends long enough in fantasy, one understands fantasy. If one spends time on the Web, one becomes adept at navigating cyberspace. I sometimes feel like I belong on the other side of the keyboard, inside the monitor, in the expanse of the network. It’s where I feel comfortable; free to be myself, agile, and experienced. But that doesn’t help me in the real world. The translation is imperfect; the correspondence flawed. Reality and your interpretation are in skew.
