All Things CC:

All things Communication & Computing….

Archive for the ‘Computing’ Category

IoT Platforms: Dominance of AWS


Needless to say, there is no dearth of IoT platforms offering you the opportunity to get your “things” and “devices” connected and reap the benefits of IoT.

It is interesting to note that Amazon continues to dominate in this segment of Cloud Computing as well. I ran a rudimentary script to look up where the developer sites for different IoT platforms are hosted, and the results were pretty interesting – 8 out of 10 are hosted on AWS (disclaimer: it is not clear to me whether their entire platform is on AWS or only the developer front end). This is actually 8 out of 9 now: since I originally wrote the script, the ThingWorx and Axeda platforms have merged (all three URLs, the two old ones and the new ThingWorx.com, resolve to the same IP, 104.130.163.78).

And the surprise was Nest: an Alphabet/Google company, more than two years after being acquired, still has its developer site running on AWS!

Take a look at the screenshot of the output below, and if you want to run the script yourself and try other sites, copy the script at Gist.
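
If you are curious what such a lookup can look like, here is a minimal bash sketch of the idea (an illustrative reconstruction, not the actual Gist; the hostnames are just examples from this post). It resolves each developer site to an IP and asks whois who owns the address block, since AWS ranges are registered to Amazon:

#!/usr/bin/env bash
# Illustrative sketch only -- the real script lives in the Gist linked above.
for site in developer.nest.com developer.thingworx.com; do
  ip=$(dig +short "$site" | tail -n1)                   # last line is the final A record
  owner=$(whois "$ip" | grep -Ei -m1 'orgname|org-name') # e.g. "OrgName: Amazon..."
  echo "$site -> $ip ($owner)"
done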

[Screenshot: output of the hosting lookup script]

Implications:

This also brings up an interesting challenge for these companies now that Amazon has AWS IoT – Cloud Services for Connected Devices. AWS IoT may not offer the level of completeness that platforms such as Ayla or Exosite do, but its feature set is comprehensive enough to reduce the differentiation between them. The other choice is to go with Google, Microsoft, or IBM – and all three have also added IoT enhancements and features to their cloud offerings.

The choice of not going with a cloud PaaS at all is equally unattractive: building their own infrastructure will be costly for IoT platforms, or they will lack the scalability IoT demands.

I feel this will accelerate consolidation in the IoT platform space (witness Microsoft’s acquisition of Solair), or leave companies unable to offer the scale that IoT needs.

 


Written by Ashu Joshi

May 5, 2016 at 2:27 pm

Docker: Introduction and How To Access It Natively from Mac OS X


I have been exploring Docker technology for projects leveraging Node.js. My experimentation was all on my Retina MacBook Pro running Yosemite. Docker belongs to a relatively new breed of virtualization technology known as containers. A commonly used analogy for Docker is to compare it to real-life shipping containers or Lego bricks: it provides a fundamental unit, and with it a way for an application to be portable and moveable, regardless of hardware.

Here is a quick snapshot from the “What is Docker?” page:

[Diagram: VMs vs. Containers]

“Docker” encompasses the following:

  • Docker client: this is what runs on our machine. It’s the docker binary that we interface with whenever we open a terminal and type $ docker pull or $ docker run. It connects to the Docker daemon, which does all the heavy lifting, either on the same host (in the case of Linux) or remotely (in our case, the Linux VM).
  • Docker daemon: this is what does the heavy lifting of building, running, and distributing your Docker containers (refer to the “Docker Engine” in the diagram above).
  • Docker images: images are the blueprints for our applications. Keeping with the Lego-brick analogy, they are what we build real instances from. An image can be a bare OS like Ubuntu, but it can also be Ubuntu with your web application and all its necessary packages installed.
  • Docker containers: containers are created from Docker images; they are the running instances built from those blueprints. They can be started, run, stopped, deleted, and moved.
  • Docker Hub (Registry): a Docker registry is a hosted server that holds Docker images. Docker (the company) offers a public registry called the Docker Hub, which we’ll use in this tutorial, but the whole system is also open source for people to run on their own servers and store images privately.
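
To make these pieces concrete, here is a minimal command sketch of the lifecycle using the stock Ubuntu image from the Docker Hub; the client relays each request to the daemon, which pulls the image and runs a container from it:

$ docker pull ubuntu               # client asks the daemon to fetch the image from Docker Hub
$ docker run -it ubuntu /bin/bash  # daemon creates and starts a container from the image
$ docker ps -a                     # list containers, running and stopped
$ docker rm <container-id>         # delete a stopped container; the image itself is untouched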

Docker is a Linux technology, so it doesn’t run natively on Mac OS X; we need a Linux VM to run the containers. There are two ways this can be done on Mac OS X:

  • Use Docker’s own Mac OS X approach. This involves Boot2Docker, which has a built-in Linux machine. This is straightforward and well described. The boot2docker tool makes this about as easy as it can be by provisioning a Tiny Core Linux virtual machine running the Docker daemon, and installing a Mac version of the Docker client which communicates with that daemon.
  • Use a VM such as VirtualBox or VMware Fusion and run Linux as the guest OS in the VM. I will be covering this approach; there are several blog posts covering how to do it with VirtualBox. The challenge with this approach, unlike the boot2docker approach, is that all Docker commands need to be issued by logging into the Linux VM. This post gets into how to get a native-ish experience on your Mac when you are running a Docker host that boot2docker doesn’t manage; specifically, the Docker host here runs on Ubuntu Trusty as a VMware Fusion guest OS.

The Basics:

1. I have Ubuntu 14.04.02 Server, 64-bit (recommended for Docker), command line only, installed as one of the guest OSes under VMware Fusion. I am using VMware Fusion 7.1.2.

2. Install Docker following the instructions from the Ubuntu/Linux section. There are prerequisites that need to be taken care of if you are running something other than Ubuntu Trusty 14.04. The install primarily involves the following command (details in the link above):

$ wget -qO- https://get.docker.com/ | sh
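
To sanity-check the install while still on the Ubuntu guest, the stock hello-world image from the Docker Hub works well; it pulls a tiny image and runs a container that just prints a confirmation message:

$ sudo docker run hello-world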

3. Our goal is to run Docker commands from the Mac OS X terminal. The trick is knowing that the Docker client does all of its interaction with the Docker daemon through a RESTful API. Any time you use docker build or docker run, the Docker client is simply issuing HTTP requests to the Docker Remote API on your behalf. So we need to open up a “communication channel”, making the API of the Docker host in the virtual machine accessible to Mac OS X. Two steps are involved:

– Edit the /etc/default/docker file in Ubuntu Trusty to include the line shown below

DOCKER_OPTS="-H tcp://0.0.0.0:2375"

and then restart the Docker service:

$ sudo service docker restart
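
To confirm the daemon is now listening on TCP port 2375, here is a quick check on the Ubuntu guest (assuming netstat is present, as it is on a stock Trusty install):

$ sudo netstat -lntp | grep 2375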

4. The following command run from the Ubuntu OS won’t work now, since Docker is accessed through a TCP connection:

$ docker ps

This on the other hand should work:

$ docker -H tcp://127.0.0.1:2375 ps
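
Since the client is just issuing HTTP requests, you can also poke the Remote API directly from the Ubuntu guest as a sanity check; the /version endpoint returns the daemon’s version details as JSON:

$ curl http://127.0.0.1:2375/version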

5. Now we need to forward the IP address / port so that Docker commands can be issued from Mac OS X. This involves configuring the VMware Fusion NAT settings for the Ubuntu Trusty guest OS. We know we need to forward port 2375. In addition, we need the IP address of the Docker engine, or Docker host, running in Ubuntu. The way to get that is to run the command

$ ifconfig

This will dump the networking settings. Look for docker0 and note down its IP address; in my case it was 172.17.42.1. This is important to understand: we are NOT looking for the IP address of the guest OS (its eth0 interface) but for the Docker host IP address.

Next, run ‘ifconfig’ in the Mac OS X terminal and get the IP address, or rather the adapter name, assigned to the Ubuntu guest OS. I have two guest OSes installed, Windows 8.1 and Ubuntu Trusty. When I run the ‘ifconfig’ command I see a number of networking interfaces, including two labeled ‘vmnet1’ and ‘vmnet8’. The way to ensure that you pick the right interface is to look at the IP address assigned to each ‘vmnet’ on Mac OS X and at the eth0 interface in Ubuntu Trusty, and pick the one that is on the same subnet. Here is what I had:

vmnet8 on Mac OS X IP Address: 192.168.112.1
eth0 on Ubuntu Trusty Guest OS running VMWare Fusion: 192.168.112.129
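
If you want to pull out just these addresses, here is a quick sketch (the interface names are from my setup and may differ on yours):

$ ifconfig docker0 | grep 'inet addr'       # on the Ubuntu guest: the docker0 address
$ Mac OS X> ifconfig vmnet8 | grep 'inet '  # on the Mac: the vmnet8 adapter address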

Now that we know which vmnet instance we are dealing with, we need to modify nat.conf for that instance. I will use vmnet8 as the example:

$ Mac OS X> sudo vi /Library/Preferences/VMware\ Fusion/vmnet8/nat.conf

In this file look for the [incomingtcp] section, and add the following:

2375 = 172.17.42.1:2375

The format of this section is <external port number> = <VM’s IP Address>:<VM’s Port Number>. Note that in the above example, the address used for the VM’s IP address is actually the address of the docker0 interface reported by the ifconfig command when run in Ubuntu.
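
Note that the NAT change may not take effect until VMware Fusion’s networking is restarted. Restarting Fusion itself works; if the vmnet-cli utility is present in your Fusion version (an assumption worth verifying on your install), this should do it without a full restart:

$ Mac OS X> sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop
$ Mac OS X> sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --start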

6. We are close now. Make sure that you have brew on your Mac OS X (if you are developing on a Mac, you probably already have it!). Install the Docker client on Mac OS X using:

$ Mac OS X> brew update 
$ Mac OS X> brew install docker
$ Mac OS X> docker --version

The last command should tell you whether Docker was installed correctly, and its version number. Assuming you installed Docker on the Ubuntu guest OS and the Docker client on Mac OS X via brew minutes apart, you will have the latest version of both and they will be the same. The versions have to be identical for this to work.

7. Now you are set to access Docker commands from the Mac OS X terminal. Run the same command as above

$ Mac OS X> docker -H tcp://localhost:2375 ps

and it should work. The -H flag instructs the Docker client to connect to the API at the specified endpoint (instead of the default UNIX socket).

To avoid typing the flag and the endpoint before every command, add the following export; after that, you can access all Docker commands as you would natively.

export DOCKER_HOST=tcp://localhost:2375
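
To make this stick across terminal sessions, append it to your shell profile (assuming bash, the Mac OS X default):

$ Mac OS X> echo 'export DOCKER_HOST=tcp://localhost:2375' >> ~/.bash_profile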

Docker Away!

NOTE:

Opening/forwarding ports on the OS is NOT recommended in production environments. This is merely to ease development effort if you are doing it on your own MacBook.

Written by Ashu Joshi

July 17, 2015 at 1:25 pm

Cloud Gaming: OnLive [Emulating Netflix?]


Two things to consider: performance may not yet suit the extreme gaming segment, but just as HD video streaming over the Internet is becoming possible, extreme-graphics gaming could eventually take off. Maybe Netflix should consider buying them to complement its offering?

The MicroConsole is powered by the Marvell Armada 1000; if OnLive takes off, this could be a boost for the $MRVL stock as well [read the Engadget review, which talks about the Armada 1000: http://engt.co/hDElDq].

Quoted from www.businessweek.com:
Using a palm-size MicroConsole adapter hooked up to your TV and home network, you play “in the cloud” over the Internet, with the games actually running on powerful servers that might be 1,000 miles away.
This has the potential to be disruptive, maybe even revolutionary, technology. The $99 adapter plugs into a high-definition port on your television, and comes with a wireless handheld controller and one game.
And OnLive is also introducing a Netflix-like (NFLX) all-you-can-play plan for $9.99 a month for a selection of its titles.

Read more at www.businessweek.com

Written by Ashu Joshi

January 25, 2011 at 10:31 am

Posted in Cloud Computing


How Prototyping and/or Betas Improve Success


I ended my last post on the importance of prototyping. I would argue “prototyping” was always important, but technology advances such as Cloud Computing and Web 2.0 have made it accessible. The Lean Startup movement promotes the idea of betas and early prototypes.

1. Software, especially Web-based software, is easy to prototype, and a no-brainer to implement as part of company or product evolution. Aza Raskin, creative lead for Firefox, posted an excellent slide deck on the advantages of prototyping.

2. Consumers have embraced the concept of a beta: products introduced with a minimal but essential feature set. Google has made the concept of a product in “Beta” very popular and acceptable.

3. Investors and bloggers have endorsed the idea of “prototyping” and of finding customers early on. As an example, Vivek Wadhwa recently wrote about “looking before you leap” and talked about a startup that had been experimenting and prototyping for 14 months.

Written by Ashu Joshi

July 15, 2010 at 10:09 pm

How To: Develop for Plug Computers


Here is a short list of sites that are useful if you are developing for Plug Computers:

1. The Master Marvell Site: http://plugcomputer.org/

The site is sponsored and supported by Marvell, who created the Plug Computing category. Using this site you can find almost all the information you need on Plug Computers: the software, the hardware, and the tools.

2. A useful site within the master site above is the Plug Wiki: http://plugcomputer.org/plugwiki/index.php?title=Main_Page

3. A very good tool for flashing or burning bootloaders & OS images onto the Plug is the ESIA tool, which can be found here: http://sourceforge.net/projects/esia/. It can be used from Linux and Windows.

4. You can participate in & search the forums hosted by the PlugComputer.org site: http://plugcomputer.org/plugforum/index.php?action=forum

5. Another site that has information on Plugs: http://computingplugs.com/index.php/Main_Page

6. Mike Staszel and Ian Botley are big fans and open-source developers of Plug Computers; they run what is known as PlugApps (formerly OpenPogo). They maintain a version of Linux with many thousands of packages that makes developing apps on the Plug very easy!

Happy Plug Computing!

Written by Ashu Joshi

July 6, 2010 at 10:58 am

Posted in Plug Computers


Innovative Platforms: Plug Computers


Marvell Technology introduced a new category of computing at the beginning of 2009 called Plug Computing: computers that plug directly into electrical sockets. The strategy behind Marvell’s effort was to create a community around Plug Computers, and to a large degree they have been pretty successful.

Take a look at the number of partner and community development sites that have come up in the last 12 months at the community site here; notable among these are the Pogoplug by Cloud Engines and an open source community being built by a student, Mike Staszel, at Openpogo. Major tech bloggers such as Om Malik have nothing but praise for the simplicity and ease of the Pogoplug.

The Plugs use high-performance processors: they run Marvell’s Sheeva ARM-core processor, which combines very high integration with low power consumption at 1.2 GHz.

The initiative is innovative in its packaging of a computing platform in the form of a “plug”, and while there may be many processors in the market that could fit the same form factor, the Sheeva from Marvell offers a very good combination of features and functions in that small form factor.

The packaging innovation is being taken a step further with a community-led development strategy for the Plug Computers. Earlier this year I attempted to start hacking on ARM-core based systems by getting the TI Beagleboard, which I wrote about here, but I found the entire process complicated and never did anything with the board after receiving it. On the other hand, getting started with the SheevaPlug was much simpler (except for the ordering process).

Marvell, of course, hopes that this community-driven development model on its processor will lead to an “iPhone SDK”-style ecosystem, promoting richer software for the Plugs and in turn selling more processors.

Marvell is on the right track – the CES in Las Vegas in January should show a glimpse of what is in store for Plug Computing.

Written by Ashu Joshi

December 24, 2009 at 4:07 pm

The Dell Dilemma


I write this as Dell is about to announce their latest quarterly results on May 28th. Dell’s struggles in the past few years are well documented in blogs (technical and financial) and in their own financial results; even after the comeback of Michael Dell, the struggle has continued. Here is a sampling:

  1. Dell’s current revenues are steeped in PCs & notebooks. In the financial year ending January 31st, 2009, 60% of its revenues came from Mobility & PCs, with Mobility alone accounting for 31%.
  2. Even though there are some predictions in the market that businesses have quit slashing their IT budgets, it is difficult to envision how Dell is going to come out on top. Year-over-year revenues are stagnant at $61.1 billion, and profit is down from $2.94 billion to $2.47 billion.
  3. Dell still seems to be dabbling, with no clear sense of strategy and direction. Dell has done business selling flat-panel TVs and handheld devices.
  4. And recently it has been attempting to introduce a smartphone! A Google search on “Dell Cell Phone” returns a first entry indicating that they have been mulling a cell phone since 2007! But as you search and read on, you find out that carriers have more or less rejected the Dell cell phone, citing it as too dull. Oh, and as I write this and run various Google searches, I come across a post which says the Dell cell phone is dead!
  5. Dell also recently launched the Adamo series to compete with the MacBook Air from Apple. It has been slow on the netbook market trend as well. And, of course, like many others, Dell is also rumored to be tinkering with an Android-based netbook!
  6. Dell is going to face an interesting challenge on the server front thanks to cloud offerings from the likes of Amazon and Google, in addition to the traditional competition from Sun, HP, and IBM. Not to make things easier, even Cisco has entered the server market recently!
  7. Net-net, this lack of direction is well summarized here!

So what is Dell to do, IMHO:

As a recent investor in DELL, watching their stock price meander with no direction prompted me to write this post.

Dell needs to make radical changes in what they build and how they build it. They need to enter product categories that allow them to create a Blue Ocean or a Purple Cow. Simply speaking, they need to out-compete their competition; introducing just another smartphone or netbook is probably not going to be a game changer.

The down-turn in the industry is a good time for Dell to re-invent itself.

Written by Ashu Joshi

May 25, 2009 at 10:57 am

Ericsson & Cloud Computing


Ericsson usually does not figure in the cloud computing conversations and blogs, at least the ones that I have been following (and I checked: they are not among the supporters of the Cloud Manifesto project). However, I think a case can be made that Ericsson is in an excellent position to offer cloud computing services, perhaps as managed services, which they excel at. How did I arrive at this conclusion?

Being a part of Cisco’s Service Provider group, I regularly track news on Ericsson and Nokia Siemens Networks (NSN), and the first bit of information that lodged in my mind was an article in the Technology section of the NYTimes titled “Ericsson and Nokia Siemens Are Managing Just Fine“. To quote from the article:

Ericsson manages all or parts of the networks of 230 mobile operators with a total 225 million customers.

It is significant revenue for Ericsson, earning them $1.7 billion last year from managing mobile networks. With this bit of information fresh in my mind, I ended up attending a presentation by Brad Anderson on Erlang at the Atlanta AWSome meetup last night.

Brad gave an introduction to Erlang and the history of its development at Ericsson. His point was that Erlang is extremely well suited for building cloud computing platforms; from his slide titled “The Three Biggies”, here are the reasons why Erlang is ideal for cloud computing:

  • Massively Concurrent
  • Seamlessly Distributed
  • Fault Tolerant

The slides from an earlier presentation by Brad, which make a good case for “Why Erlang?”, can be found here. They also talk about how Erlang is catching on with major social networking sites such as LinkedIn and Facebook. Erlang, as I learnt, has been around since 1990, is used actively by Ericsson in their telephone infrastructure unit, and is augmented by the Open Telecom Platform (OTP). And guess what: Erlang is open source!

It is difficult to tell how much Ericsson equipment and how much Erlang-based technology is being used to manage these mobile networks, but it is certainly interesting to note that the combination of technology expertise in Erlang (a programming platform/language) and massive experience in managing networks and operations would be very useful in providing cloud computing services.

The real question is: does Ericsson really think of itself as being in a position to provide cloud computing services?

Written by Ashu Joshi

April 15, 2009 at 10:41 am