Book Review – Peak: Secrets from the New Science of Expertise

Hiker standing on the peak of a large rock.

Photo by Kalen Emsley

Peak is co-authored by Anders Ericsson (a heavyweight in the field of psychology) and Robert Pool (an established science writer). Peak is a summation of close to 40 years of Dr. Ericsson’s research into elite performance and some of the surprising discoveries made along the way. A lot of his research has surfaced in other titles such as:

  • Outliers: The Story of Success by Malcolm Gladwell – which introduced the concept of the 10,000-hour rule, something derived from Dr. Ericsson’s research. Outliers takes a very different position on the meaning of the results, and it is interesting to see the contrast.
  • Deep Work by Cal Newport – a recommended book on applying many of these concepts in the context of programming and the knowledge-worker economy.

But Peak is direct from the man himself and reveals quite a few insights overlooked by other authors.

Deliberate Practice

This book centers around the concept of deliberate practice: what it is and isn’t, and how it’s different from how we usually learn. So what is deliberate practice? Let’s review the key concepts:

  1. Has Specific Goals – Deliberate practice must have very clear constraints and focus. You must dive deep to become an elite performer and goals keep us on track. Veering off costs you time and productivity.
  2. Focused Practice – You need to be completely focused for long periods, no distractions. The authors mention that high level violinists practice so intensely that many take mid-day naps to recover. Most people have to build up and practice to reach the required levels.
  3. Always Uncomfortable – It’s important to always be working on the area where you are weakest. This seems obvious, but usually once we get pretty good at something we stick to habit and stop improving.
  4. Feedback Loop – In order to improve the areas where we are weak, we need to know where we are weak. It’s important that our practice gives us results as close to real-time as possible so we know if we are heading in the right direction.
  5. Teachers – If possible you should have a teacher, coach, or mentor to help uncover your weak areas. They can also show you the proper mental representations that will improve your ability to make complex ideas useful.

Mental Representations

One of the concepts discussed is the idea of mental representations. The key idea is that once concepts become intuitive to us, we can generalize them and build a hierarchy of knowledge.

For instance, the book describes a study participant who worked his way up to memorizing about 80 unique digits. At first he just tried memorizing the numbers but was limited to around 7-9 digits, which is the usual limit of short-term memory.

By creating memorized representations like robot is 42 or cat is 68, simply picturing a robot picking up a cat could generate the number 4268.

We humans are better at recalling stories and images (chess masters, for example, sometimes describe picturing the board as lines of force) than raw information. That means a large part of building knowledge involves finding the best way to represent a complex topic as a simple abstraction.

Things need to be practiced and mulled over until they seem obvious. Concepts made intuitive integrate easily with the rest of your knowledge, and this web of knowledge allows you to make the kind of new and unique connections seen in expert performance.

Final Thoughts

Peak: Secrets from the New Science of Expertise is an excellent book. I would say it’s required reading for anyone in a technical field.

Unfortunately, if you are looking for step-by-step advice, you won’t find it.

But the book does a great job of giving you everything needed to recognize deliberate practice, helping you judge whether you are actually performing it. Figuring out the exact steps ultimately requires advice from experts as well as experimentation.

If you are a developer, you may find it difficult to find truly great teachers. The good news for those starting out: you just need someone who is good at explaining things and better than you. But as you progress, you will continue to need better teachers.

Something Cal Newport brings up quite a bit is that few people know about or implement this stuff. That means if you can figure it out, you can enjoy a huge advantage over your peers. It’s fertile territory for those who want to blaze a path, since most knowledge work has yet to reach the structure of sports or music.

If you are interested, you can pick up Peak: Secrets from the New Science of Expertise from Amazon fairly cheaply; the audio version is also great.

If you enjoyed the read or have any comments be sure to follow me @zen_code and let me know.

Do Vim Plugins Improve Productivity?

View on the long journey

Photo by Aneta Ivanova

 

I’m usually open to experimentation when it comes to productivity in my development work. I recently stumbled upon an article advocating Vim bindings for Visual Studio and decided to take the plunge and give it a try for at least a week. I had a little Vim experience, so I figured I wasn’t flying completely blind. The idea of being able to work without ever moving my fingers away from the home keys was quite appealing to me. I figured that alone would be a workflow improvement, not to mention the wealth of other features.

Sensitive users may want to skip this next statement. I know some of you are thinking: isn’t Vim only for people who still call themselves AMIGA programmers? Why should we be moving backwards? My honest answer: I don’t know. But I have spoken with others and read quite a few posts which have given me a good enough argument to at least find out for myself.

Deciding to go full immersion, I re-bound all of my IDEs and dug in. Boy, did I have a rude awakening as to how polished my Vim skills really were.

First There was Despair

My first few days were the polar opposite of productive. I found myself grasping for my cheat sheet of Vim modes and commands what felt like every 30 seconds. Struggling to recall a command you literally just looked up is a good test of humility. More than a few times I had to turn off the plugins to get important work done quickly. Cheating, I know.

But as I persisted, I improved. Getting better at the basic things leads to exploring the more difficult things. Knowing that the light at the end of the tunnel was a useful skill I would keep regardless was helpful.

It’s also worth briefly discussing why I decided to use plugins for other IDEs rather than Vim itself. The truth is I have Visual Studio and Sublime tuned for the type of projects I’m currently working on. Does that mean I shouldn’t explore using Vim itself in my workflows? Sure I should, and will, but I didn’t want to bite off more than I could chew. This does come at the cost of losing some of what are, some would argue, Vim’s best features.

What was Gained?

So I know I’m going to leave out someone’s favorite feature here. But these are just some of the features I found useful early on, and they are in no way meant to represent the full breadth of what these plugins offer.

  • The base key bindings are all very close to the home keys. No reaching for the mouse required if you set things up right. Once the keybindings become second nature, text just seems to flow like a river. No need to look down to jump to the end of a document or move the cursor with arrow keys.
  • Macros. They are a huge advantage when doing very repetitive tasks. Being able to quickly record a set of keystrokes and then replay them is handy when working in HTML.
  • Similar to macros, but rather than recording keystrokes you can command and execute a pattern of keystrokes, such as jump 4 words or down 10 lines. What’s neat is that even those can be stacked.

Final Thoughts

So the big question: was it worth the struggle? Yes, I think so. It was far from easy, but I did in fact find that it improved my input speed. While far from being a Matrix-like neural plug, it’s kind of weird how it begins to feel like you can almost think it and it happens. Things require so little movement it’s almost hard to explain. But, as it’s jokingly called, the Vim learning cliff is not easy to scale, and it may be worth easing in a bit more slowly than I did. I am still far from smooth and still find myself regularly looking things up. There is so much nuance available it could take years to master, which is kind of exciting for me. But if you’re interested in following suit, here is a list of plugins; you can find one for pretty much any editor.

Other Useful Resources and Links

  • Sublime Six – Surprisingly broad support for Vim; worth checking out if you use Sublime.
  • VsVim – Adds Vim support to Visual Studio. A bit limited on features compared to others, but it has all the important ones.
  • vim-adventures – Learn Vim as a game, if that is your learning style. It is paid, but you do get the first three levels for free.
  • Cheat sheet – Don’t be surprised if you clutch it like a life preserver at first. This is the one I used, but there are plenty of other good ones that may be better formatted for you.
  • Learning Goodies – A wonderful list of helpful links to help you on your learning adventure.

If you decide to take the journey, be sure to let me know @zen_code.

K-Means What? A Less Bewildering Introduction

Today my hope is to give a less bewildering introduction to one of the cornerstones of unsupervised learning: K-Means. The only expectations are some programming experience and a passing understanding of Python. If you are already a rock star machine learning developer, then you likely know all this like the back of your hand. As for everyone else, buckle up.

So, a quick refresher on some machine learning 101. There are two primary types of learning algorithms: supervised and unsupervised. Supervised algorithms validate their estimations while learning by correcting themselves against supplied answers. Supplying the answers allows the algorithm to model data in a way specified by its creator.

Unsupervised algorithms simply require data with enough inherent context for them to unravel a pattern. This is handy, since one difficulty in machine learning is labeling data so that an algorithm can learn to fill gaps and generalize accurately. But it does come at a cost: unsupervised algorithms will not be able to label and categorize as straightforwardly as supervised learners, because of the inherent context that labels give data. For instance, it would not be possible to hand an unsupervised algorithm trained on animal data points a dog and expect it to directly output the category “dog”. But it will likely throw it into the same category as other dog data points, and maybe wolves as well. Unsupervised learners’ real strength is finding patterns we didn’t know to look for.
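To make that distinction concrete, here is a minimal sketch (assuming scikit-learn is installed; the data and labels are made up for illustration): the supervised model is handed the answers, while the unsupervised one only ever sees the data.

from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = [[0, 0], [0, 1], [5, 5], [5, 6]]  # four 2D data points
y = ["cat", "cat", "dog", "dog"]      # labels only the supervised learner sees

supervised = LogisticRegression().fit(X, y)            # learns the categories we defined
unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)  # finds two groups on its own

print(supervised.predict([[5, 7]]))  # -> ['dog']
print(unsupervised.labels_)          # -> cluster ids like [0 0 1 1] (numbers, not names)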

So, let’s get started.

K-Means Basics

We are going to use the variable k to denote the number of categories (clusters) we want the algorithm to split our data into.

Let us also call the center (x, y) point of a category in our graph its centroid; more on how this works shortly. Each data point x_n will always be assigned to its closest centroid.

For those who (like myself) slept through math class, let’s quickly talk about finding the distance between two vectors, which is formally noted as:

\left \| \vec{x^n} - \vec{\mu^k} \right \|

Which broken down looks like:

distance = \sqrt{(x_{2} - x_{1})^2 + (y_{2} - y_{1})^2}

For those a little rusty on their Euclidean geometry, here is a simple explanation as to how distance is derived.
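As a quick sanity check, here is that formula applied to two made-up points in a couple of lines of Python:

import math

x1, y1 = 1.0, 2.0  # first point
x2, y2 = 4.0, 6.0  # second point

distance = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
print(distance)       # 5.0
print(distance ** 2)  # 25.0, the squared distance k-means actually compares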

Optional Nerd Knowledge

I’m sure, like most of us, you’re wondering why not just use cosine dissimilarity or some other measure of distance. Well, in truth there are variations of k-means which do calculate distance with other methods.

Here is a brief explanation as to why Euclidean distance is used, as well as why we square the distance, as you will see below. The short, smarty-pants answer goes as follows: “…the sum of squared deviations from (the) centroid is equal to the sum of (the) pairwise squared Euclidean distances divided by the number of points”. But mostly, squaring saves us from taking a square root every time we compare distances.

Step Through

Let’s quickly walk through the algorithm’s steps. First, randomly initialize the x and y positions of k centroids.

1.) For each point, find the centroid with the smallest squared distance and assign the point to that centroid.

min_{k} \left \| \vec{x^n} - \vec{\mu^k} \right \|^2

2.) Now that we have updated all of our points, let’s update our centroids. Each centroid moves to the average of all of its newly assigned points, which is done by adding up all the points in its cluster and dividing by the count. Writing C_k for the set of points currently assigned to centroid k, that is:

\vec{\mu_{k}} = \frac{1}{\left | C_{k} \right |} \sum_{\vec{x_{i}} \in C_{k}} \vec{x_{i}}

Now all that is needed is to repeat steps one and two, either for a fixed number of iterations (say 100) or until the centroids stop moving by any significant amount, at which point you stop the loop.

Now that wasn’t too bad, was it? Let’s see a bit of sample code to demonstrate it in practice.


# assumes two globals set up elsewhere: `points`, a list of (vector, cluster_index)
# tuples, and `k`, a list of centroid vectors; PVector and its dist/add/div helpers
# come from Processing's Python mode, where this example runs
def assign_clusters():
    for x_indx, x in enumerate(points):
        min_distance = float('inf')
        min_category = 0

        for idx, centroid in enumerate(k):
            # using the built-in vector function for distance to keep it simple
            distance = x[0].dist(centroid)

            # is it closer? if so, make it our category
            if distance < min_distance:
                min_distance = distance
                min_category = idx

        # update the point with its new category
        points[x_indx] = (x[0], min_category)

def update_centroid():
    for idx, _ in enumerate(k):
        total = PVector(0, 0)
        count = 0

        # sum the vectors assigned to this centroid
        for p in (item for item in points if item[1] == idx):
            total.add(p[0])
            count += 1

        # bad things happen when you divide by zero
        if count == 0:
            continue

        # normalize to the average position of all assigned points
        total.div(count)
        k[idx] = total
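The two functions above lean on Processing’s PVector and a couple of globals, so for completeness here is a rough, self-contained plain-Python sketch of the whole loop: random initialization, assignment, update, and a simple “has anything moved?” stopping check. The function and variable names are my own, not from any particular library.

import random

def kmeans(points, k, max_iters=100, tol=1e-4):
    # 1. randomly initialize the centroids by picking k of the points
    centroids = random.sample(points, k)

    for _ in range(max_iters):
        # 2. assign every point to its nearest centroid (squared distance)
        clusters = [[] for _ in range(k)]
        for (x, y) in points:
            d2 = [(x - cx) ** 2 + (y - cy) ** 2 for (cx, cy) in centroids]
            clusters[d2.index(min(d2))].append((x, y))

        # 3. move each centroid to the average of its assigned points
        moved = 0.0
        new_centroids = []
        for centroid, cluster in zip(centroids, clusters):
            if not cluster:  # bad things happen when you divide by zero
                new_centroids.append(centroid)
                continue
            cx = sum(p[0] for p in cluster) / len(cluster)
            cy = sum(p[1] for p in cluster) / len(cluster)
            moved += (cx - centroid[0]) ** 2 + (cy - centroid[1]) ** 2
            new_centroids.append((cx, cy))
        centroids = new_centroids

        # 4. stop once the centroids have essentially stopped moving
        if moved < tol:
            break

    return centroids, clusters

# e.g. kmeans([(1, 1), (1, 2), (8, 8), (9, 8)], k=2)
#      settles on centroids near (1, 1.5) and (8.5, 8)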

Advantages

  • It’s simple, fast, easy to understand, and usually pretty easy to debug.
  • Reliable when clear patterns exist.

Disadvantages

  • It can get stuck in local optima, which usually requires re-running the algorithm several times and taking the best result.
  • It won’t detect non linear clusters.

K-Means doesn’t like non linear data –   🙁 [source]

Final Thoughts

Hopefully you have found this useful; if not, or should you have any questions, be sure to let me know on Twitter @zen_code. Of course, this article is a pretty elementary description of what K-Means can do, so I’ve included a link to the full code as well as additional resources below for those looking for a bit more or who would like to see some interesting applications. As always, happy learning. One last note: if you are interested in using this on a large or production data set, whatever you do, don’t write your own; scikit-learn has an excellent k-means toolset.
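As a rough sketch of what that looks like (the toy data and parameter values here are just for illustration, not from this article’s example code):

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# toy data: 300 points scattered around 3 hidden centers
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# n_init re-runs the whole algorithm 10 times and keeps the best result,
# which is the usual workaround for getting stuck in local optima
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)

print(km.cluster_centers_)  # the final centroids
print(km.labels_[:10])      # the cluster assigned to each of the first 10 points
print(km.inertia_)          # total squared distance of points to their centroids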

Summary of DeepStack: Expert-Level AI in Heads-Up No-Limit Poker

Below is an excerpt of some of the research I have been doing for a Udacity course I am taking on AI. It’s been a lot of fun digging into the merging of logic and machine learning. I’ll likely post a few more details, and perhaps a review, once complete. But for now, here is a summary of some pretty cool research coming out of the University of Alberta.

https://arxiv.org/pdf/1701.01724.pdf

Imperfect-information games provide a significant challenge compared to perfect-information games. Since limited information is available to the agent, the agent must calculate a statistically best choice. Formally, this statistical best response is accomplished by approximating the Nash equilibrium given the private information held by the opponent. DeepStack builds upon the traditional technique known as counterfactual regret minimization, a recursive algorithm which pre-computes the probability distributions ahead of time. The issue with this method is that the size of Heads-Up No-Limit Poker (HUNL) approaches 10^160 decision points, so approximations of the game states would have to be made. A goal of DeepStack was to eliminate as much of that contextual information loss as possible.

When playing a two-player zero-sum game such as HUNL, our goal is to maximize expected utility against a best-response strategy. DeepStack calculates this through three elements. The first generates a representative picture of the current situation: the agent’s two cards in hand, the pot, and any visible cards dealt to the table. The second component is a depth-limited search technique which uses a trained neural network to quickly evaluate values below a certain depth; by using the flexibility of neural networks to generate values, the depth that needs to be searched can be greatly limited. The third is a limited set of look-ahead branches: by restricting the search to a small set of actions, the computational cost of the recursive traversal can be reduced.

The internal representation of the public information is regenerated after each action by a process the authors call “continual re-solving”. This requires two inputs: the first is the public state; the second is a vector of counterfactual values, the valid “what ifs” relevant to the opponent. The counterfactual values start from a default state for the first round; afterwards the vector is updated with the output of each re-solve.

In order to make re-solving practical, the depth of the search is limited to four actions. Values below that depth are calculated by a neural network the authors call a “Deep Counterfactual Value Network”. Given the complexity created by imperfect information, each input vector must contain what amounts to a description of the poker game being played: concretely, a representation of the probability distribution over the cards dealt to the private hands and a value summarizing the current knowns. The authors liken this output to “intuition”. The network itself consists of seven hidden layers with 500 nodes each, trained on 10 million randomly generated poker games for the turn and 1 million for the flop.

DeepStack has shown significant improvement over previous systems. Competing against a variety of professional human players over a total of 44,852 games, DeepStack won at a rate of 492 mbb/g, which the authors indicate is over 4 standard deviations away from zero. By comparison, a previous system, Claudico (2015), lost at a rate of 91 mbb/g. DeepStack shows a novel approach, leaning on a neural network’s strength at estimating values over a wide range of outcomes, and it demonstrates future possibilities for AI in solving problems with imperfect information.

Does it Function?

Thinking about Refactoring Functions

Photo by: Gaby Av

What are the ideal properties of a function, and when should you refactor one? Or from another angle, when is the best time to refactor a function out, restructure it internally, or just leave it alone? In the last week or so I’ve noticed an interesting pattern in functions written by some slightly novice developers, and I got to thinking about how best to give constructive feedback. Something I found interesting about all of it is that the structure of the functions, at least on the surface, seemed to meet all the traditional best practices: they performed a single purpose, were created to avoid repetition, etc. But the code was awkward to work with, and the functions almost obfuscated rather than simplified. So what attributes made them difficult to work with, and kind of smell of code rot? I thought I would list out some further considerations when breaking a function out.

Functional Mutation

What the heck is functional mutation? Well, I did just make the term up, but it seemed appropriate. Usually this is a sneaky violation of “functions should perform a single purpose”. It happens when a function’s output varies unpredictably from what can be gathered from the function’s name and its inputs. This is usually caused by the code inside the function varying its output wildly due to conditions that are non-intuitive, at least to the outside observer. It really pays to keep conditional logic which impacts the straightforwardness of a function at a higher level of the code. If that just doesn’t seem reasonable, then that is usually an indication that it’s time to rethink the structure to match the conditional lines.
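Here is a contrived sketch of what I mean. The names and the discount rule are entirely made up (not from the code I was actually reviewing): the first version’s result quietly depends on state the caller can’t see, while the second hoists the decision up a level.

# Smells of functional mutation: calculate_price(100) can return different
# things on different days because of module state the caller never sees.
PROMO_ACTIVE = True

def calculate_price(base):
    if PROMO_ACTIVE and base > 50:  # hidden condition buried inside
        return base * 0.8
    return base

# Clearer: the conditional lives in the higher-level code, and each function's
# output is fully determined by its inputs.
def apply_discount(base, rate):
    return base * (1 - rate)

def checkout(base, promo_active):
    rate = 0.2 if promo_active and base > 50 else 0.0
    return apply_discount(base, rate)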

Sequential Dependence

When a function cannot perform its required task due to its dependence on another function having run first, it’s usually bad. There are exceptions, especially when designing something that interfaces with an external system. But it’s very important to ask whether the dependence can be removed by restructuring the input parameters, restructuring the functions, or both. The problem with sequential dependence when it comes to maintenance is that it hurts code readability and locks in any changes to external dependencies. So ask yourself twice whether there is a better way if you find a set of functions like this.
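Again, a made-up sketch to illustrate: the first pair of functions only works if they are called in the right order, while the second removes the hidden ordering by passing the data along explicitly.

# Sequentially dependent: process_report() silently relies on load_report()
# having been called first to fill in a shared variable.
_report_rows = []

def load_report(path):
    global _report_rows
    with open(path) as f:
        _report_rows = f.read().splitlines()

def process_report():
    # empty (or stale) results if load_report() wasn't called first
    return [row.upper() for row in _report_rows]

# Restructured: the dependency is visible in the parameters, and either
# function can be tested or reused on its own.
def load_rows(path):
    with open(path) as f:
        return f.read().splitlines()

def process_rows(rows):
    return [row.upper() for row in rows]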

Sometimes it’s better not to refactor a function out unless it can stand on its own. But remember that the person who comes behind you (usually you, six months from now, having totally forgotten everything) won’t know your ideology while they try to absorb its purpose. So do yourself, and them, a favor by writing code which speaks to them, rather than code whose meaning has to be wrangled out.

An Argument for the Jack of All Trades

Is it true that a person who has chosen to remain good at everything rather than excellent at one thing has made themselves noncompetitive? Does the old adage “jack of all trades, master of none” still ring true today, and has it ever? This is something I have given quite a bit of thought to, and I have wondered if my desire to roam and explore topics has in fact put me at some disadvantage. I suspect in some regards it has, but why, and should that be the case? Clearly I don’t completely believe it has, or this would be a rather short post.

Once upon a time, in an age when things were handcrafted, it took a great deal of time and effort to create even the simplest objects. A wooden chair would take days; cathedrals could take centuries. Their creators’ only hope of making a sustainable living at their trade was to be quick at making repetitive, hand-shaped items. To learn was to apprentice alongside someone who had dedicated their life to the subject. Little was shared without being spoken and shown, since books were rare. But progress changed that, and suddenly one could learn without being shown.

Things have obviously continued to change since then; knowledge has proliferated, and manual tasks have shrunk in turn. Machines perform the repetitive work for us, leaving us time to think of new ways to improve things. This shifts the emphasis away from being good at a singular task and toward an ever-expanding array of higher-level tasks.

The present is driven by the consumption and application of new information plus a few key areas of rarely changing knowledge. Once you learn the key areas, you can pivot much more easily than in the past. Specialization is by no means obsolete, and it never will be. But perhaps having a wide understanding is as important now as a narrow one. Pulling ideas from a big bucket with many tools will always yield more progress than a small bucket with narrowly focused ones.

Anyways, food for thought in an age where specialization has grown rather than shrunk. But maybe it’s not so bad being a jack of all trades.

Raspberry Pi Interrupts – An adventure with vector interrupt controllers

Today’s topic is the Raspberry Pi interrupt controller. I know what you’re thinking, but it’s not as bad as it sounds, promise. So, I’ve been goofing around with my ultra sweet Raspberry Pi and in doing so had this wonderful idea to just roll my own OS. You know, average weekend project stuff, right? Anyways, my work on RazOS has just begun and will most likely continue for some time. Though it’s been awesome so far, I have encountered many a roadblock along the way and thought I should share a few, in hopes of saving someone else from the pain and suffering of sorting through Linux source as well as the little tidbits that are scattered about. That said, if you’re interested in OS coding, I’m planning a series to come which should provide a bit more back-story. For those interested in learning something cool, this should give you a bit of the detail you will need. So if you have no idea what a vector interrupt controller is, then stick with me and I’ll bring you up to speed; if you do know, then you may want to skip the whys and go to the core of the article down below.

What is a Vector Interrupt Controller

So Wikipedia’s definition goes as follows: “An interrupt vector is the memory address of an interrupt handler, or an index into an array called an interrupt vector table that contains the memory addresses of interrupt handlers.” Not particularly enlightening; the good news is it’s much simpler than it’s made to sound. To get started, I think it’s best to quickly go over interrupts in general. The purpose of interrupts is simple. Let’s imagine for a moment you are an accountant, a pretty good one at that, who can master even the most obscure of tax codes and spreadsheets. I know, bear with me. Unfortunately your assistant is, well… not quite all there. When he has new information to give you, or that spreadsheet you asked for, instead of knocking on your office door he just stands there. I guess he is just hoping you will think to answer. Of course, if you don’t quickly enough, his ADD may get the best of him and he wanders off, so you’re forced to constantly get up and check the door to see if he is there. Now it’s pretty easy to see you wouldn’t get much done having to check the door all the time, no matter how good you are. So what’s the simplest fix? Probably just to tell him to knock instead of standing there! That is exactly what an interrupt is: a knock.

Unfortunately our CPU is a bit dumber than our accountant, so we have to be a little more straightforward with our decision making, but the metaphor is still quite applicable. Still, the question lingers: what is a vector interrupt controller? In short, a vector interrupt controller is for when we have more than one door, which is almost always the case. In the case of the Raspberry Pi, it can be visualized as one buzzer where you have to look up which door it came from, but we will come back to that in more concrete terms.

Raspberry Pi Interrupt Controller

To begin, let’s look at this bit of code from a great example provided by David Welch.


.globl _start
_start:
;@ the exception vector table: the hardware jumps to one of these eight words
;@ on reset, undefined instruction, SWI, prefetch/data abort, IRQ, or FIQ
ldr pc,reset_handler
ldr pc,undefined_handler
ldr pc,swi_handler
ldr pc,prefetch_handler
ldr pc,data_handler
ldr pc,unused_handler
ldr pc,irq_handler
ldr pc,fiq_handler

;@ the addresses each entry above loads into pc; everything except reset and irq just hangs
reset_handler: .word reset
undefined_handler: .word hang
swi_handler: .word hang
prefetch_handler: .word hang
data_handler: .word hang
unused_handler: .word hang
irq_handler: .word irq
fiq_handler: .word hang

reset:
;@ copy the 16 words above (vector table plus handler addresses) from 0x8000,
;@ where the Pi loads our image, down to 0x0000 where the hardware expects them
mov r0,#0x8000
mov r1,#0x0000
ldmia r0!,{r2,r3,r4,r5,r6,r7,r8,r9}
stmia r1!,{r2,r3,r4,r5,r6,r7,r8,r9}
ldmia r0!,{r2,r3,r4,r5,r6,r7,r8,r9}
stmia r1!,{r2,r3,r4,r5,r6,r7,r8,r9}

irq:
;@ what gets called if an IRQ event happens

This is just one possible way to initialize your vector table, but I find it very straightforward for illustration purposes. This code’s job is to put in place the addresses of the subroutines which the processor’s hardware will call automatically should any of these events be triggered (and enabled). By default the hardware assumes the vectors live in the first eight words of memory, though this can be changed to the high vectors should you wish it; you can find out how here, and get familiar with that document, as many ARM-specific questions can be answered with it. Since the Pi expects and runs your code at 0x8000, we must relocate the table to 0x0000, which is what the remainder of the code does for us.

Since we are only interested in interrupts today, I’m going to ignore all but the IRQ entries, though it should be noted that FIQ is essentially the same, with a few perks to speed up interrupt handling. You can find more on the subject in the ARM Manual.

Who knocks?

So the final part of our interrupt adventure is finding which door the knock came from. But before we get any knocks we have to enable IRQs, which is a two-step procedure. First, we must flip the “master IRQ switch”, so to speak, which lives in the CPSR status register; note the mrs and msr calls. Once again, code courtesy of David Welch.


.globl enable_irq
enable_irq:
mrs r0,cpsr        ;@ read the current program status register
bic r0,r0,#0x80    ;@ clear the I bit (0x80), which masks IRQs
msr cpsr_c,r0      ;@ write it back: IRQs are now enabled
bx lr

Once we have enabled IRQs as a whole, we must then enable the individual “doors” we wish to hear knocks from. From this point each subcomponent handles the details, so you must refer to each individual component for the specifics of how it triggers an interrupt. It is worth mentioning that, unfortunately, most of the interrupts are not handled by the ARM processor, since technically it is the secondary processor on the Broadcom BCM2835; the largely undocumented VideoCore GPU is the king of this chip. But the ARM does have access to all of the primary ones.

The subsystems can be found here. One big point to note about the peripherals document: unless you have the MMU enabled and mapped as stated, almost all addresses in the guide must be translated from 0x7Ennnnnn to 0x20nnnnnn. For example, the guide states that the interrupt controller’s registers begin at 0x7E00B000. Nope! They are in fact at the physical address 0x2000B000, so keep this in mind. Since this is running a bit long I’m going to end it here; a follow-up to come.

Important Further Reading:

To Engineer: My Design Philosophy

I have a habit of calling myself an engineer. You could say: well, like the engineer that built my favorite clogged turnpike, or that blender that lasted three days? Well, yes and no, but rather than argue semantics, let’s first take a look at the origin of the word.

The word engineer’s roots go back to the Latin ingeniare (“to contrive, devise”) and ingenium (“cleverness”), per Wikipedia. If your Spanish is up to par, they should also seem a bit familiar to you. One interesting thing to note is that the English word engineer also shares an origin with the word engender (“to bring about, give rise to”). This particular framing of the word gives me pause for thought, as my job as a programmer requires a great deal of attention to that ethos.

Ethos? Yes, I believe so. I believe that a more accurate definition of the word engineer, despite the modern context, is the title of someone whose core mastery and profession is to contrive and create tangible, useful devices that move the world forward. An engineer derives these constructs through cleverness and intellect. Now, I use the term devices loosely; this can range from the physical (a phone, a turnpike, a computer processor) to the metaphysical (concepts, ideas, thoughts). It is important to include thought-space in our definition because physical creations must first be derived through thought. This is perhaps the biggest departure from the traditional definition.

How it’s useful

By considering this, we can also use the word engineering to define a philosophy. In its simplest form, it is the philosophy of creating things through cleverness and the creative use of known or discovered principles. As a software developer, I have found it important to approach problems by giving myself a label that extends beyond the traditional “software developer of X”. Acknowledging that I am in fact engineering a solution in a greater sense allows me to act outside of the normal barriers that we impose on our minds. I instead become a problem solver, limited only by my own imagination and my astuteness in research and learning.

Further Reading

The Executioner: the JavaScript execution context and how to defeat it

JavaScript, as the dynamic and versatile language it is, provides us with several non-intuitive points to be wary of while constructing our code. But before we dig too deep into the issues and their solutions, let’s review a few facets of the JavaScript interpreter.

Execution Context

[Note: also sometimes referred to as Activation Object, Scope Object, Variable Fairy]

Death

Not as scary as it sounds, I assure you. To understand its purpose we merely need to break down its name. Execution Context: it is the manager of the context of the code that is currently being executed. Quickly, let’s take a look at what the execution context’s properties are.

var executionContextObject = {
    variableObject: {},
    scopeChain: {},
    this: {}
};

  • VariableObject – Simply the list of variables/arguments which may be accessed in the function. More on this in a moment.
  • ScopeChain – The execution context objects of all the parent functions.
  • This – the value associated with the current this binding.

Variable Object Array

Let’s take a moment to peel back the covers and understand how the variable object works. At this point the interpreter has reached a new function and is creating the execution context for us. These are the steps it takes while creating it.

  1. Look at the inputs||arguments of the function and make an arguments array out of them.
  2. Find each function declaration and save a pointer to it; if a matching pointer already exists, overwrite it with the newest value. Stick the results into the Variable Object array.
  3. Find each variable declared with var and create it with the default value “undefined” (the actual assignment, e.g. var me = ‘Ben’, happens when execution reaches that line). Stick the results into the Variable Object array.

This array is what lets us do things like this.

var mySweetObject = {
    color: "Red",
    magicalAbility: "telekinesis"
};

//Normal way to access a property
var color = mySweetObject.color;
//Using the Variable Object Array
var magicalAbility = mySweetObject["magicalAbility"];

The Execution Stack

Now that we understand a bit about how the execution context looks, let’s take a look at the execution stack. The execution stack’s job is to keep track of what’s being executed, along with its associated execution context. We can think of it simply as a stack where items are placed on top as they are encountered, or LIFO if you prefer that verbiage.


[Image created by David Shariff]

Hoisting

You hear a lot about hoisting, but armed with the above you can tell all of your friends how simple it is, properly asserting your intellectual superiority over them. Let’s look at an example.

console.log(typeof funkyFunction); // "undefined" (not a ReferenceError; the var already exists)

var funkyFunction = function(){
    return "awesome";
}

Wha, a bit odd that funkyFunction already exists (typeof gives “undefined” rather than throwing a ReferenceError) before the line that declares it. But we know the variable object array is created before the code is executed, so every var inside the scope of this function exists before execution; only the assignment waits for its line. (Side note: this does not apply to variables declared in child functions.)

Scope Chain

As mentioned before, the scope chain is an array which contains all parent execution contexts. Its purpose is to store the location of every variable the current function can access; this is how JavaScript implements its external variable storage, better known as lexical scoping. Lexical scoping simply means functions keep access to the variables of their parent functions, even when called from another physical location.

Use local variables

Now consider the impact of all our variables being stored this way: every time a non-local variable is used, the interpreter must walk the scope chain to resolve it. This is a very good reason to be wary of the global scope, since it must always sit at the end of the scope chain. So lighten up on those global declarations; it can save you on bigger JavaScript projects. I have seen scope variables easily reach over 100,000 in big projects, which means a serious impact on scan times when the interpreter must resolve variables. Of course, using local variables avoids this entirely, allowing the interpreter to stay within the local variable object array.

P.S.

Don’t forget that the lack of a var keyword when declaring a variable always lands that variable in the global scope; this includes assignments made inside functions.
Thanks for reading!

Further Reading

SQL Voodoo Query: Merging a column in multiple rows into just one

[This is an old but regularly requested  post of mine from BIDN in 2011, updated due to my renowned grammar]

Today I wanted to quickly mention a technique to pivot a one-to-many data structure into a one-column list of items. I know it has been mentioned a bit differently in previous blogs on BIDN.

The problem?

We have three tables

Instructors, InstructorClasses, and Classes

In the Instructors table we have the instructor’s name, ID, and yada yada. InstructorClasses is our junction table and links each instructor to their list of classes, and finally Classes is our simple list of classes. Now assume we have a web page that needs to show them as below:

Instructor: Jim Bean

Classes: Rowing; Yoga; Bartending;

Now assume you are unable to format this in application code, or perhaps you are archiving data and need to convert it during ETL. To get the expected results, we can use the FOR XML PATH clause in SQL Server to provide a pivoted view of classes, as demonstrated in the example below.

SELECT LEFT(Main.Categories, LEN(Main.Categories) - 1) AS Classes,
       Instructors.FirstName + ' ' + Instructors.LastName AS InstructorsName
FROM Instructors
LEFT JOIN (SELECT DISTINCT ST2.InstructorID,
                  (SELECT DISTINCT Classes.ClassTitle + '; ' AS [text()]
                   FROM InstructorClasses ST1
                   JOIN dbo.Classes
                     ON ST1.ClassID = Classes.ClassID
                   WHERE ST1.InstructorID = ST2.InstructorID
                   FOR XML PATH ('')
                  ) AS [Categories]
           FROM InstructorClasses ST2) AS [Main]
  ON Instructors.InstructorID = Main.InstructorID