All Things CC:

All things Communication & Computing….

Docker: Introduction and How to Access It Natively from Mac OS X


I have been exploring Docker technology for projects leveraging Node.js. My experimentation was all on my Retina MacBook Pro running Yosemite. Docker belongs to a relatively new breed of virtualization technology known as containers. A commonly used analogy for Docker is to compare it to real-life shipping containers or Lego bricks: it provides a fundamental unit, and with it a way for an application to be portable and movable, regardless of the underlying hardware.

Here is a quick snapshot from the “What is Docker?” page:

[Diagram: VMs vs. Containers]

“Docker” encompasses the following:

  • Docker client: this is what’s running on our machine. It’s the docker binary that we’ll be interfacing with whenever we open a terminal and type $ docker pull or $ docker run. It connects to the docker daemon, which does all the heavy lifting, either on the same host (in the case of Linux) or remotely (in our case, interacting with our Linux VM).
  • Docker daemon: this is what does the heavy lifting of building, running, and distributing your Docker containers. (Refer to the “Docker Engine” in the diagram above)
  • Docker Images: Docker images are the blueprints for our applications. Keeping with the container/Lego brick analogy, they are the blueprints from which real instances are built. An image can be a bare OS like Ubuntu, but it can also be Ubuntu with your web application and all its necessary packages installed.
  • Docker Container: containers are created from Docker images, and they are the real instances of our containers/Lego bricks. They can be started, run, stopped, deleted, and moved (a short command sequence after this list illustrates the image/container relationship).
  • Docker Hub (Registry): a Docker Registry is a hosted registry server that can hold Docker Images. Docker (the company) offers a public Docker Registry called the Docker Hub which we’ll use in this tutorial, but they offer the whole system open-source for people to run on their own servers and store images privately.
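
To make the image/container distinction concrete, here is a minimal command sequence (a sketch assuming a working Docker setup; the public ubuntu image is just an example, and <container-id> is a placeholder for the ID that docker ps reports):

$ docker pull ubuntu             # download the ubuntu image (the blueprint) from Docker Hub
$ docker run -it ubuntu bash     # create and start a container (a live instance of that image)
$ docker ps -a                   # list containers, including ones that have exited
$ docker rm <container-id>       # delete a container; the image itself is untouched
$ docker images                  # the ubuntu image is still available locally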

Docker is a Linux technology, so it doesn’t run natively on Mac OS X; we need a Linux VM to run the containers. There are two ways this can be done on Mac OS X:

  • Use Docker’s Mac OS X approach. This involves using Boot2Docker, which has a built-in Linux machine. This is straightforward and well described. The boot2docker tool makes this about as easy as it can be by provisioning a Tiny Core Linux virtual machine running the Docker daemon and installing a Mac version of the Docker client that communicates with that daemon.
  • Use a VM such as VirtualBox or VMWare Fusion – run Linux as the Guest OS in the VM. I will be covering this approach – there are several blog posts covering how to do it with VirtualBox. The challenge with this approach, unlike the boot2docker approach, is that all Docker commands need to be issued by logging into the Linux VM. This post shows how to get a native-ish experience on your Mac when your Docker host doesn’t use boot2docker – specifically, a Docker host running on Ubuntu Trusty as a VMWare Fusion Guest OS.

The Basics:

1. I have Ubuntu 14.04.02 Server, 64-bit (recommended for Docker), command line only, installed as one of the Guest OSes under VMWare Fusion. I am using VMWare Fusion 7.1.2.

2. Install Docker following the instructions from the Ubuntu/Linux section of the Docker documentation. There are prerequisites that need to be taken care of if you are running something other than Ubuntu Trusty 14.04. The install primarily involves the following command (details are in the Docker documentation):

$ wget -qO- https://get.docker.com/ | sh
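
A quick way to verify the install on the Ubuntu guest (a minimal check, assuming the docker service started cleanly):

$ sudo docker --version        # confirm the client is installed and note the version
$ sudo docker run hello-world  # pulls a tiny test image and runs it in a container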

3. Our goal is to run Docker commands from the Mac OS X terminal. The trick is knowing that the Docker client does all of its interaction with the Docker daemon through a RESTful API. Any time you use docker build or docker run, the Docker client is simply issuing HTTP requests to the Docker Remote API on your behalf. So we need to open up a “communication channel” and make that API on the Docker host in the virtual machine accessible to Mac OS X. Two steps are involved:

– Edit the /etc/default/docker file in Ubuntu Trusty so that the daemon listens on a TCP port (2375). For example, to listen on all interfaces, add the line shown below:

DOCKER_OPTS="-H tcp://0.0.0.0:2375"

and then restart the Docker service:

$ sudo service docker restart
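
To confirm the daemon is now listening on TCP port 2375, a quick sanity check on the Ubuntu guest (netstat ships with Ubuntu Trusty):

$ sudo netstat -tlnp | grep 2375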

4. The following command, run on the Ubuntu OS, won’t work now since Docker is accessed through a TCP connection:

$ docker ps

This, on the other hand, should work (point the -H flag at the address and port the daemon is now listening on; with the setting above, the local address works):

$ docker -H tcp://127.0.0.1:2375 ps

5. Now we need to forward the IP address / port so that Docker commands can be issued from Mac OS X. This involves configuring the VMWare Fusion NAT settings for the Ubuntu Trusty Guest OS. We know we need to forward port 2375. In addition, we need the IP address of the Docker engine, or Docker host, running in Ubuntu. The way to get that is to run the command

$ ifconfig

This will dump the networking settings – look for docker0 – and note down its IP address. This is important to understand: we are NOT looking for the IP address of the Guest OS (its eth0 interface) but for the Docker host IP address.
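
A quick way to pull out just the docker0 address on Ubuntu Trusty, where ifconfig prints addresses as ‘inet addr:’:

$ ifconfig docker0 | grep 'inet addr'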

Next, run ‘ifconfig‘ in the Mac OS X terminal and get the IP address, or rather the adapter name, assigned to the Ubuntu Guest OS. I have two Guest OSes installed – Windows 8.1 and Ubuntu Trusty. When I run the ‘ifconfig’ command I see a number of networking interfaces, two of which are labeled ‘vmnet1‘ and ‘vmnet8‘. The way to ensure that you pick the right interface is to look at the IP address assigned to the ‘vmnet‘ in Mac OS X and at the eth0 interface in Ubuntu Trusty, and pick the one that is on the same subnet. Here is what I had:

vmnet8 on Mac OS X IP Address:
eth0 on Ubuntu Trusty Guest OS running VMWare Fusion:

Now that we know which vmnet instance we are dealing with in VMWare Fusion, we need to modify nat.conf for that instance; I will use vmnet8 as the example:

$ Mac OS X> sudo vi /Library/Preferences/VMware\ Fusion/vmnet8/nat.conf

In this file look for the [incomingtcp] section, and add the following:

2375 = <docker0 IP address>:2375

The format of this section is <external port number> = <VM’s IP Address>:<VM’s Port Number>. Note that in the above example, the IP address used for the VM’s IP address is actually the address of the docker0 interface reported by the ifconfig command when run in Ubuntu.
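
Note that changes to nat.conf generally take effect only after VMWare Fusion’s networking is restarted (or Fusion itself is relaunched). One way to restart the networking from the Mac OS X terminal – the vmnet-cli path may differ slightly between Fusion versions:

$ Mac OS X> sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop
$ Mac OS X> sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --start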

6. We are close now. Make sure that you have brew on your Mac OS X (if you are developing on a Mac, you probably already have it!). Install the Docker client on Mac OS X using:

$ Mac OS X> brew update 
$ Mac OS X> brew install docker
$ Mac OS X> docker --version

The last command should tell you whether the Docker client was installed correctly, and its version number. Assuming you installed Docker on the Ubuntu Guest OS and the Docker client on Mac OS X via brew only minutes apart, you would have the latest version on both and they would be the same. The versions have to be identical for this to work.

7. Now you are set to run Docker commands from the Mac OS X terminal. Run the same command as above:

$ Mac OS X> docker -H tcp://localhost:2375 ps

and it should work. The -H flag instructs the Docker client to connect to the API at the specified endpoint (instead of the default UNIX socket).
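
Because the endpoint is just an HTTP API, you can also sanity-check it without the Docker client at all – for example, with curl (assuming the port forward from step 5 is in place):

$ Mac OS X> curl http://localhost:2375/version          # daemon and API version info
$ Mac OS X> curl http://localhost:2375/containers/json  # the same information 'docker ps' returns, as JSON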

To avoid typing the flag and the endpoint before every command, add the following export, and you can then run all Docker commands as you would natively.

export DOCKER_HOST=tcp://localhost:2375
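
To make the setting persist across terminal sessions, and to confirm everything works end to end, something along these lines (a sketch assuming you use bash and ~/.bash_profile):

$ Mac OS X> echo 'export DOCKER_HOST=tcp://localhost:2375' >> ~/.bash_profile
$ Mac OS X> source ~/.bash_profile
$ Mac OS X> docker ps         # no -H flag needed any more
$ Mac OS X> docker images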

Docker Away!


Opening/forwarding ports on the OS like this is NOT recommended in production environments. This is merely to ease development effort if you are doing it on your own MacBook.

Written by Ashu Joshi

July 17, 2015 at 1:25 pm

8 Attributes of a Full Stack IoT Company


Chances are, if you are in the business of technology, you have come across the term “Full Stack”. It started a few years back with the notion of a (software) developer being able to program across all layers – you can find a great description from 2012 here: What is a Full Stack developer?. The momentum, and market, for full stack developers kept growing. Then it started being discussed at the level of a company, or rather a startup – and I would credit the VC firm A16Z with defining it at this level. A16Z has dedicated a page to the Full Stack Startup as part of the ‘trends‘ they invest in. The ideal full stack company is Apple – they do everything top to bottom, providing the best user/consumer experience to their customers (readers may be familiar with the previous iteration of full stack – the Vertically Integrated Company – I am not sure how it differs from the definition of Full Stack).

This got me thinking about the attributes, skills, and expertise needed by a Full Stack IoT Company – a company that builds a complete solution based on the benefits of IoT. Here they are:

1. Hardware design, development and manufacturing: This may or may not be part of a full stack IoT company, but the reality is that the T in IoT stands for “Things” – physical things – and interfacing with them requires hardware. Full Stack IoT companies will own or control significant aspects of the hardware required in their solution. This implies the Full Stack IoT company needs skills in hardware design and development. And as many delayed Kickstarter hardware projects prove, these companies also need expertise and experience in manufacturing and supply chain.

2. Embedded / Firmware (resource-constrained) Programming: IoT has resurfaced the lost art of the embedded programmer. Imagine the code running inside wearables or sensors – it is all embedded code, in some cases running without any real operating system. Design, development and debugging are fundamentally different from cloud, mobility, or application-level programming.

3. Application-level & Middleware Programming

4. Cloud development and operations

5. Management – of devices, of applications, of the network (even though the network belongs to a third party): as the solution provider, managing devices and apps – for version control and updates – would be needed as part of the integrated offer and solution. Once again, the Full Stack IoT company may license or integrate a third-party solution but would own the responsibility.

6. Smartphone & Tablet Apps: Do I really need to justify this?

7. Analytics, Mining, Business Intelligence: ideally the company provides basic analytics with its solutions, which can be integrated with other products and solutions for advanced analytics. Full Stack IoT companies may leverage analytics solutions from other companies, but would still retain control of the data.

8. Integration with IT systems – to interface and integrate with business applications in order to deliver the contextual value provided by the IoT system.



Full Stack Reading

The Rise And Fall Of The Full Stack Developer

A16Z’s Ben Horowitz did an interview (in Jan 2014) talking about the Full Stack Startup.


Written by Ashu Joshi

July 12, 2015 at 7:49 pm

IoT Click Bait: Be Wary of Investing Advice


I am generally accustomed to click bait, especially related to the Internet of Things – after all, it has been the buzzword in the technology industry for the last few years:

[Screenshot: examples of IoT click-bait headlines]

However, I was surprised to find investment advice being provided at a reputed site. The article is titled “Intel Corporation’s New Internet of Things Chip Looks Like a Powerhouse“. I read through the entire article, and found the following justification for why the new chip is an “IoT Powerhouse”:

I suspect these new chips should help Intel further grow its already rapidly rising Internet of Things business, as the substantially better performance in both graphics and computing should be quite attractive to potential customers.

Clearly the click bait worked, because it made my weekly Google Alert – it probably has enough shares or a high enough page rank that it made it onto my list of weekly alerts.

Now don’t get me wrong – Intel is investing heavily in IoT, and all the news indicates that they are serious about it. What I am taken aback by is that the article encourages readers to invest on the basis of this new chip – indicating that it will change the “game” – with no real evidence of how the attributes of this chip are suited for the IoT market. There is no discussion of whether it will be used in the 50 billion connected things, and if so, how the “graphics” cores help the IoT scenario.




Written by Ashu Joshi

June 27, 2015 at 12:35 pm

Wide Area IoT & Sensor Networks: Momentum Is Building


There has been a steady movement to build dedicated networks for connecting sensors and devices, to address the challenges posed by existing data networks (e.g. cellular/3G/4G or satellite). The two primary contenders in long-range or wide-area IoT/sensor networks are LoRa (by Semtech) and SigFox. Two recent events indicate the momentum is building:

Samsung announced an undisclosed investment in SigFox’s $115M round that has not yet closed. In addition, Samsung is going to partner with SigFox on its Artik Platform.

Actility has received $25M in funding led by Ginko Ventures. Two things are notable: the funding will enable Actility to put momentum behind its ThingPark platform, which leverages the LoRa wide-area network; and Ginko is backed by Foxconn (the world’s largest electronics manufacturer – Apple manufactures the iPhone with Foxconn). Additional investors include KPN, Swisscom & Orange.

Written by Ashu Joshi

June 20, 2015 at 4:24 pm

Really, Dell? Is this your IoT Strategy?


I have Google Alerts set up for “Internet of Things”. While most of the alerts from the weekly email were clogged with the announcement of Google’s Brillo, one of the links was about Dell’s IoT strategy. The title of the post was classic click bait:

[Screenshot: the click-bait headline about Dell’s IoT strategy]


I was curious about what beans were being spilled. What I read was an utter disappointment, and it made me think of how every technology company is trying to hitch itself to the IoT bandwagon. The essence of Dell’s IoT strategy seems to be centered around selling an Intel Celeron based PC that is being called a “Gateway”, without any specific OS! I was expecting more, but then I had to remind myself – building IoT applications is not trivial, and requires understanding and executing the notion of distributed applications that integrate with various business processes to unlock value. And that is something Dell is not good at. It is a “box” company trying to cash in on the IoT buzz.





Written by Ashu Joshi

May 29, 2015 at 11:32 pm

The Step(s) Before Predictive Analytics in IoT


There is no doubt that “Internet of Things”, or IoT, is the buzzword in technology – destined to change our future forever. And yes, I do believe that IoT will change our future – or rather, the change has begun and is only accelerating. However, another buzzword marries IoT with Big Data: Predictive Analytics. It is all about how all the sensor and machine data can help find patterns and be beneficial for use cases like predicting equipment failure. IoT analytics, I feel, is at times a bubble within a bubble.

Is predictive analytics important? Yes it is, but not at the cost of how IoT can change the present – not just the future. I see the following steps before you can get to a predictive stage for the mass market (some use cases have probably already crossed several of these steps):

1. Reliable connectivity of sensors, and reliable transfer of data – for example, did the temperature spike because it actually became hot, or was it an anomaly caused by a loose connection or a software glitch?

2. If the sensors are already connected in the world of instrumentation, such as SCADA or other legacy proprietary protocols – migrating either the connectivity or the data from those sensors over to the world of IP & cloud computing

3. Making the present, and the near term, better than before: actions that can be taken without the need to predict the future, such as shutting down the equipment right now because the temperature has actually spiked.

Predicting the future is one of the many benefits of IoT.

Written by Ashu Joshi

May 15, 2015 at 6:43 pm

The New PaaS


The Internet of Things (IoT) enables a new class of service – Product as a Service: a world where devices, things, appliances, and objects become a service being offered. This started at the high end of the market many years ago, even before IoT went mainstream and technology analysts began forecasting the “billions” of devices that would be connected. A great example is Rolls Royce, whose airplane engine division was generating significant service revenue by monitoring its jet engines in flight even before 2009: Britain’s lonely high-flier.

Imagine a world where you don’t pay for light bulbs or doors or HVAC systems upfront, but pay monthly – it sounds preposterous, doesn’t it? Take it a step further – home builders could bundle all these services, team up with a finance company, and include it all in a single lease. It does sound preposterous, but attempts of this nature are real. Consider what BMW & SolarCity are doing – buy an EV from BMW, and you can have solar power included in your lease. It is the value being generated by connecting and controlling everything!


Written by Ashu Joshi

March 13, 2015 at 4:40 pm

