Archive for the ‘Computing’ Category
Needless to say, there is no dearth of IoT platforms offering you the opportunity to get your “things” and “devices” connected and reap the benefits of IoT.
It is interesting to note that Amazon continues to dominate this segment of Cloud Computing as well. I ran a rudimentary script to look up where the developer sites of different IoT platforms are hosted, and the results were pretty interesting: 8 out of 10 are hosted on AWS (disclaimer: it is not clear to me whether their entire platform is on AWS or only the developer front end). It is actually 8 out of 9 now, because since I originally wrote the script the Thingworx and Axeda platforms have merged (all three URLs, the two old ones and the new ThingWorx.com, resolve to the same IP – 126.96.36.199).
And the surprise was Nest: an Alphabet/Google company that, more than two years after being acquired, still has its Developer site running on AWS!
Take a look at the screenshot of the output below, and if you want to run the script yourself, and try other sites – copy the script at Gist.
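The original script lives at the Gist linked above; as an illustration of the idea only (this is a hypothetical sketch, not the author's actual script, and the hostnames are placeholders), such a lookup boils down to resolving each developer-site hostname and printing its IP so you can check it against published AWS address ranges:

```shell
#!/bin/sh
# Hypothetical sketch of a hosting-lookup script (not the author's Gist version).
# resolve() wraps the DNS query so the resolver can be swapped out.
resolve() {
  dig +short "$1" | head -n 1
}

# Print each site next to the first IP address it resolves to.
lookup_sites() {
  for site in "$@"; do
    printf '%s -> %s\n' "$site" "$(resolve "$site")"
  done
}

# usage (placeholder hostnames):
#   lookup_sites example.com example.org
# then compare the printed IPs against AWS's published address ranges.
```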
It also brings up an interesting challenge for these companies now that Amazon has AWS IoT – Cloud Services for Connected Devices. AWS IoT may not offer the level of completeness that others such as Ayla or Exosite do, but its feature set is comprehensive enough to reduce the differentiation between them. The other choice is to go with Google, Microsoft, or IBM – and all three have also added IoT features to their cloud offerings.
Not building on a cloud PaaS at all is equally unattractive: an IoT platform running its own infrastructure will either be costly to operate or lack scalability.
I feel this will accelerate consolidation in the IoT platform space (like Microsoft’s acquisition of Solair) or companies being unable to offer the scale that is needed for IoT.
I have been exploring Docker technology for projects leveraging Node.js. My experimentation was all on my Retina MacBook Pro running Yosemite. Docker belongs to a relatively new breed of virtualization technology known as containers. A commonly used analogy for Docker is to compare it to real-life shipping containers or Lego bricks: it provides a fundamental unit, and with it a way for an application to be portable and movable, regardless of hardware.
Here is a quick snapshot from the “What is Docker?” page:
“Docker” encompasses the following:
- Docker client: this is what's running on our machine. It's the docker binary that we'll be interfacing with whenever we open a terminal and type docker pull or docker run. It connects to the Docker daemon, which does all the heavy lifting, either on the same host (in the case of Linux) or remotely (in our case, interacting with our VirtualBox VM).
- Docker daemon: this is what does the heavy lifting of building, running, and distributing your Docker containers.
- Docker Images: docker images are the blueprints for our applications. Keeping with the container/Lego brick analogy, they are the blueprints from which real instances are built. An image can be a bare OS like Ubuntu, but it can also be Ubuntu with your web application and all its necessary packages installed.
- Docker Container: containers are created from docker images, and they are the real instances of our containers/lego bricks. They can be started, run, stopped, deleted, and moved.
- Docker Hub (Registry): a Docker Registry is a hosted registry server that can hold Docker Images. Docker (the company) offers a public Docker Registry called the Docker Hub which we’ll use in this tutorial, but they offer the whole system open-source for people to run on their own servers and store images privately.
Docker is a Linux technology, so it doesn't run natively on Mac OS X; we need a Linux VM to run the containers. There are two ways this can be done on Mac OS X:
- Use Docker’s Mac OS X approach. This involves using Boot2Docker – that has a built-in Linux Machine. This is straight-forward and well described. The boot2docker tool makes this about as easy as it can be by provisioning a Tiny Core Linux virtual machine running the Docker daemon and installing a Mac version of the Docker client which will communicate with that daemon.
- Use a VM such as VirtualBox or VMware Fusion and run Linux as the guest OS in the VM. I will be covering this approach – there are several blog posts covering doing it with VirtualBox. The challenge with this approach, unlike the boot2docker approach, is that all Docker commands need to be issued by logging into the Linux VM. This post shows how to get a native-ish experience on your Mac while running a Docker host that boot2docker doesn't cover: specifically, a Docker host running on an Ubuntu Trusty guest OS under VMware Fusion.
1. I have an Ubuntu 64-bit (recommended for Docker) 14.04.02 Server (command line only) installed as one of the guest OSes under VMware Fusion. I am using VMware Fusion 7.1.2.
2. Install Docker following instructions from the Ubuntu/Linux section. There are prerequisites that need to be taken care of if you are running something other than Ubuntu Trusty 14.04. The install primarily involves the following command (details in the link above):
$ wget -qO- https://get.docker.com/ | sh
3. Our goal is to run Docker commands from the Mac OS X terminal. The trick is to know that the Docker client does all of its interaction with the Docker daemon through a RESTful API. Any time you use docker build or docker run, the Docker client is simply issuing HTTP requests to the Docker Remote API on your behalf. So we need to open up a “communication channel” and make the API of the Docker host in the virtual machine accessible to Mac OS X. Two steps are involved:
– Edit the /etc/default/docker file in Ubuntu Trusty so that the Docker daemon listens on a TCP port,
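A typical DOCKER_OPTS line for this step (a hedged example based on the standard Docker-on-Ubuntu-14.04 setup of that era, not necessarily the author's exact configuration) is:

```shell
# /etc/default/docker  (assumed contents)
# Bind the daemon to TCP port 2375 on all interfaces. Note that the default
# Unix socket is not listed here, which is why a plain `docker ps` on the VM
# stops working in the next step.
DOCKER_OPTS="-H tcp://0.0.0.0:2375"
```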
and then restart the Docker service:
$ sudo service docker restart
4. The following command from the Ubuntu OS won't work now, since Docker is accessed through a TCP connection:
$ docker ps
This on the other hand should work:
$ docker -H tcp://127.0.0.1:2375 ps
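This also demonstrates the RESTful-API point from step 3: docker ps corresponds to the Remote API's GET /containers/json endpoint, so the same listing can be fetched with curl (the helper below is hypothetical, and assumes the daemon was bound to port 2375 as above):

```shell
# Build the Remote API URL for an endpoint on the locally exposed daemon.
api_url() {
  echo "http://127.0.0.1:2375/$1"
}

# `docker -H tcp://127.0.0.1:2375 ps` is equivalent to this raw HTTP request:
#   curl -s "$(api_url containers/json)"
```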
5. Now we need to forward the IP address/port so that the Docker commands can be issued from Mac OS X. This involves configuring the VMware Fusion NAT settings for the Ubuntu Trusty guest OS. We know we need to forward port 2375. In addition, we need the IP address of the Docker engine, or Docker host, running in Ubuntu. The way to get that is to run the ifconfig command in Ubuntu:
$ ifconfig
This will dump the networking settings – look for docker0 – and note down the IP Address. In my case it was 172.17.42.1. This is important to understand. We are NOT looking for the IP Address of the Guest OS or the eth0 of the Guest OS but the Docker Host IP Address.
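If you want to extract just that docker0 address programmatically, here is a hypothetical helper (it assumes the classic Ubuntu 14.04 ifconfig output format, where the address line reads `inet addr:172.17.42.1`):

```shell
# Read `ifconfig` output on stdin and print the docker0 IPv4 address.
parse_docker0_ip() {
  awk '/^docker0/ { in_block = 1 }
       in_block && /inet / {
         for (i = 1; i <= NF; i++)
           if ($i ~ /^addr:/) { sub("addr:", "", $i); print $i; exit }
       }'
}

# usage: ifconfig | parse_docker0_ip
```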
Next, run ‘ifconfig’ on the Mac OS X terminal and get the IP address, or rather the adapter name, assigned to the Ubuntu guest OS. I have two guest OSes installed: Windows 8.1 and Ubuntu Trusty. When I run the ‘ifconfig’ command I see a number of networking interfaces, two of which are labeled ‘vmnet1’ and ‘vmnet8’. The way to ensure that you pick the right interface is to look at the IP address assigned to each ‘vmnet’ in Mac OS X and at the eth0 interface in Ubuntu Trusty, and pick the one that is on the same subnet. Here is what I had:
vmnet8 on Mac OS X IP address: 192.168.112.1
eth0 on Ubuntu Trusty guest OS running under VMware Fusion: 192.168.112.129
Now that we know which vmnet instance we are dealing with in VMware Fusion, we need to modify nat.conf for that instance. I will use vmnet8 as the example:
$ Mac OS X> sudo vi /Library/Preferences/VMware\ Fusion/vmnet8/nat.conf
In this file look for the [incomingtcp] section, and add the following:
2375 = 172.17.42.1:2375
The format of this section is <external port number> = <VM’s IP Address>:<VM’s Port Number>. Note in the above example – the IP Address used for the VM’s IP Address is actually the address of the docker0 interface provided in the ifconfig command when run in Ubuntu.
6. We are close now. Make sure that you have brew on your Mac OS X (if you are developing on a Mac, you probably already have it!). Install the Docker client on Mac OS X using:
$ Mac OS X> brew update
$ Mac OS X> brew install docker
$ Mac OS X> docker --version
The last command should tell you whether Docker was installed correctly, and its version number. Assuming you installed Docker on the Ubuntu guest OS and the Docker client on Mac OS X via brew only minutes apart, you should have the latest version on both, and they should be the same. The versions have to be identical for this to work.
7. Now you are set to access Docker commands from the Mac OS X terminal. Run the same command as above:
$ Mac OS X> docker -H tcp://localhost:2375 ps
and it should work. The -H flag instructs the Docker client to connect to the API at the specified endpoint (instead of the default Unix socket).
To avoid typing the flag and the endpoint before every command, add the following export, and you can access all Docker commands as you would natively.
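The export in question (assuming the daemon was forwarded to local port 2375 as in step 5) is:

```shell
# Add to ~/.bash_profile on the Mac; the docker client then defaults
# to this endpoint and the -H flag is no longer needed.
export DOCKER_HOST=tcp://localhost:2375
```

After this, a plain `docker ps` on the Mac behaves like the `-H` form above.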
Opening/forwarding ports on the OS is NOT recommended in production environments. This is merely to ease development if you are working on your own MacBook.
Two things to consider: performance may not yet be at the extreme-gaming level, but just as HD video streaming over the Internet has become possible, extreme-graphics gaming could eventually take off. Maybe Netflix should consider buying them to complement their offering?
The MicroConsole is powered by the Marvell Armada 1000; if OnLive takes off, this could be a boost for the $MRVL stock as well [read the Engadget review, which talks about the Armada 1000, here: http://engt.co/hDElDq]
Using a palm-size MicroConsole adapter hooked up to your TV and home network, you play “in the cloud” over the Internet, with the games actually running on powerful servers that might be 1,000 miles away.
This has the potential to be disruptive, maybe even revolutionary, technology. The $99 adapter plugs into a high-definition port on your television, and comes with a wireless handheld controller and one game.
I left my last post on a note about the importance of prototyping. I would argue “prototyping” was always important, but technology advances such as Cloud Computing and Web 2.0 have made it accessible. The Lean Startup movement promotes the idea of betas and early prototypes.
1. Software, especially Web-based, is easy to prototype, and a no-brainer to implement as part of company or product evolution. Aza Raskin, creative lead for Firefox, posted an excellent slide deck on the advantages of prototyping.
2. Consumers have embraced the concept of Beta, products introduced with minimal but most important features: Google has made the concept of a product in “Beta” very popular and acceptable.
3. Investors and bloggers have endorsed the idea of “prototyping” and finding customers early on; as an example, Vivek Wadhwa recently wrote about “looking before you leap” and talked about a startup that has been experimenting & prototyping for 14 months.
Here is a short-list of sites that would be useful if you are developing for Plug Computers:
1. The Master Marvell Site: http://plugcomputer.org/
The site is sponsored and supported by Marvell who created the category of Plug Computing. Using this site you can find almost all the information you need on Plug Computers – the software, the hardware and the tools.
2. A useful site within the master site above is the Plug Wiki: http://plugcomputer.org/plugwiki/index.php?title=Main_Page
3. A very good tool for flashing or burning Bootloaders & OS Images on the Plug is the ESIA Tool which can be found here: http://sourceforge.net/projects/esia/. This tool can be used from Linux and Windows.
4. You can participate in & search the forums hosted by the PlugComputer.org site: http://plugcomputer.org/plugforum/index.php?action=forum
5. Another site that has information on Plugs: http://computingplugs.com/index.php/Main_Page
6. Big fans and open source developers of Plug Computers are Mike Staszel and Ian Botley – they run what is known as PlugApps (formerly OpenPogo). They maintain a version of Linux with many thousands of packages that makes developing apps on the Plug very easy!
Happy Plug Computing!
Marvell Technology introduced a new category of computing at the beginning of 2009 called Plug Computing – computers that plug directly into electrical sockets. The strategy behind Marvell’s effort was to create a community around Plug Computers, and to a large degree they have been pretty successful.
Take a look at the number of partners and community development sites that have come up in the last 12 months at the community site here – notable among these are the Pogoplug by Cloud Engines and an open source community being built by a student, Mike Staszel, at Openpogo. Major tech bloggers such as Om Malik have nothing but praise for the simplicity and ease of Pogoplug.
The Plugs use high-performance processors: they run the Marvell Sheeva ARM-core processor, highly integrated and low-power, at 1.2GHz.
The initiative is innovative in its packaging of a computing platform – in the form of a “plug” – and while there may be many processors in the market that could fit in the same form factor (as of a Plug) – the Sheeva from Marvell is a very good combination of features and functions in the given small form factor.
The packaging innovation is being taken a step further with a community-led development strategy for the Plug Computers. Earlier this year I attempted to start hacking on ARM-core based systems by getting the TI Beagleboard, which I wrote about here – but I found the entire process complicated, and after getting the board I never did anything with it. On the other hand, getting started with the SheevaPlug was so much simpler (except the ordering process)…
Marvell, of course, hopes that this community-driven development model on its processor will lead to an “iPhone SDK” style model, promoting richer software for the Plugs and in turn selling more processors.
Marvell is on the right track – the CES in Las Vegas in January should show a glimpse of what is in store for Plug Computing.
I write this as Dell is about to announce its latest quarterly results on May 28th. Dell’s struggles in the past few years are well documented in blogs (technical and financial) and in their own financial results; even after the comeback of Michael Dell the struggle has continued. Here is a sampling:
- Dell’s current revenues are steeped in PCs & notebooks. The financial year ending January 31st, 2009 had 60% of its revenues in Mobility & PCs, with Mobility accounting for 31% of revenues.
- Even though there are some predictions in the market that businesses have quit slashing their IT budgets, it is difficult to envision how Dell is going to come out on top. Year-over-year revenues are stagnant at $61.1 billion and profit is down from $2.94 billion to $2.47 billion.
- Dell still seems to be dabbling with no clear sense of strategy and direction. Dell has done business selling flat panel TVs and handheld devices.
- And recently it has been attempting to introduce a smartphone! A Google search on “Dell Cell Phone” turns up, as the first result, news that they have been mulling a cell phone since 2007! But as you search and read on, you find that carriers have more or less rejected the Dell cell phone, citing it as too dull. And as I write this and run various Google searches, I come across a post saying that Dell’s cell phone is dead!
- Dell has also recently launched the Adamo series to compete with the MacBook Air from Apple. It has been slow to catch the netbook market trend as well. And, of course, like many others, Dell is also rumored to be tinkering with an Android-based netbook!
- Dell is going to face an interesting challenge on the server front thanks to cloud offerings by the likes of Amazon and Google, in addition to the traditional competition from Sun, HP, and IBM. Not to make things easier, even Cisco has recently entered the server market!
- Net-net this lack of direction is well summarized here!
As a recent investor in DELL, watching the stock price meander with no direction prompted me to write this post. So what is Dell to do? IMHO:
Dell needs to make radical changes in what and how they build their products. They need to enter product categories that allow them to create a Blue Ocean or Purple Cow. Simply speaking, they need to out-compete their competition – introducing just another smartphone or netbook is probably not going to be a game changer.
The down-turn in the industry is a good time for Dell to re-invent itself.