All Things CC:

All things Communication & Computing….

Archive for the ‘Cloud Computing’ Category

IoT Platforms: Dominance of AWS


Needless to say, there is no dearth of IoT platforms offering you the opportunity to get your “things” and “devices” connected and reap the benefits of IoT.

It is interesting to note that Amazon continues to dominate this segment of Cloud Computing as well. I ran a rudimentary script to look up where the developer sites of different IoT platforms are hosted, and the results were pretty interesting: 8 out of 10 are hosted on AWS (disclaimer: it is not clear to me whether the entire platform is on AWS or only the developer front end). This is actually 8 out of 9 now, because since I originally wrote the script the ThingWorx and Axeda platforms have merged (all three URLs, the two old ones and the new ThingWorx.com, resolve to the same IP, 104.130.163.78).

And the surprise was Nest: an Alphabet/Google company that, more than two years after being acquired, still has its developer site running on AWS!

Take a look at the screenshot of the output below; if you want to run the script yourself and try other sites, copy the script from the Gist.

[Screenshot: script output, 2016-05-05]
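For reference, here is a minimal sketch of the kind of lookup the script performs, assuming dig and whois are available; the hostnames below are illustrative and the actual Gist may differ:

#!/bin/bash
# Sketch: resolve each developer-site hostname to an IP, then check the owning organization.
# The host list is illustrative, not the exact set from the screenshot above.
for host in developer.nest.com developer.thingworx.com; do
  ip=$(dig +short "$host" | tail -n1)                           # last line is the resolved A record
  org=$(whois "$ip" | grep -iE 'OrgName|org-name' | head -n1)   # owning org (Amazon for AWS-hosted IPs)
  echo "$host -> $ip ($org)"
done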

Implications:

This also brings up an interesting challenge for these companies now that Amazon has AWS IoT (Cloud Services for Connected Devices). AWS IoT may not offer the level of completeness of platforms such as Ayla or Exosite, but its feature set is comprehensive enough to reduce the differentiation between them. The other choice is to go with Google, Microsoft or IBM, and all three have also added IoT features and enhancements to their cloud offerings.

The choice of not going with a cloud PaaS at all is equally devastating: building and running their own infrastructure is costly for IoT platforms, or they will lack the scalability that IoT demands.

I feel this will accelerate consolidation in the IoT platform space (like Microsoft’s acquisition of Solair), or leave companies unable to offer the scale that IoT needs.

 


Written by Ashu Joshi

May 5, 2016 at 2:27 pm

Docker: Introduction and How-To Access natively from Mac OS


I have been exploring Docker for projects leveraging Node.js. My experimentation was all on my Retina MacBook Pro running Yosemite. Docker belongs to a relatively new breed of virtualization technology known as containers. A commonly used analogy for Docker is to compare it to real-life shipping containers or Lego bricks: it provides a fundamental unit, and with it a way for an application to be portable and movable regardless of the underlying hardware.

Here is a quick snapshot from the “What is Docker?” page:

[Diagram: VMs vs. Containers]

“Docker” encompasses the following:

  • Docker client: this is what runs on our machine. It’s the docker binary that we interface with whenever we open a terminal and type $ docker pull or $ docker run. It connects to the docker daemon, which does all the heavy lifting, either on the same host (in the case of Linux) or remotely (in our case, interacting with the Linux VM).
  • Docker daemon: this is what does the heavy lifting of building, running, and distributing your Docker containers. (Refer to the “Docker Engine” in the diagram above)
  • Docker Images: Docker images are the blueprints for our applications. Keeping with the container/Lego brick analogy, they are the blueprints from which real instances are built. An image can be a bare OS like Ubuntu, but it can also be Ubuntu with your web application and all its necessary packages installed.
  • Docker Container: containers are created from Docker images, and they are the real, running instances of our containers/Lego bricks. They can be started, run, stopped, deleted, and moved (a quick example follows this list).
  • Docker Hub (Registry): a Docker Registry is a hosted registry server that can hold Docker Images. Docker (the company) offers a public Docker Registry called the Docker Hub which we’ll use in this tutorial, but they offer the whole system open-source for people to run on their own servers and store images privately.
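To make the image/container distinction concrete, here is a quick illustration (the specific image tag is only an example):

$ docker pull ubuntu:14.04           # fetch an image (the blueprint) from Docker Hub
$ docker run -it ubuntu:14.04 bash   # create and start a container (a running instance) from that image
$ docker ps -a                       # list containers, both running and stopped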

Docker is a Linux technology, so it doesn’t run natively on Mac OS X; we need a Linux VM to run the containers. There are two ways this could be done on Mac OS X:

  • Use Docker’s own Mac OS X approach. This involves using Boot2Docker, which comes with a built-in Linux machine. This is straightforward and well described. The boot2docker tool makes it about as easy as it can be by provisioning a Tiny Core Linux virtual machine running the Docker daemon and installing a Mac version of the Docker client that communicates with that daemon.
  • Use a VM such as VirtualBox or VMware Fusion and run Linux as the guest OS in the VM. I will be covering this approach; there are several blog posts covering how to do it with VirtualBox. The challenge with this approach, unlike the boot2docker approach, is that all Docker commands have to be run by logging into the Linux VM. This post gets into how to get a native-ish experience on your Mac when your Docker host is not managed by boot2docker; specifically, the Docker host here is Ubuntu Trusty running as a VMware Fusion guest OS.

The Basics:

1. I have an Ubuntu 64-bit (recommended for Docker) 14.04.2 Server (command line only) installed as one of the guest OSes under VMware Fusion. I am using VMware Fusion 7.1.2.

2. Install Docker following the instructions from the Ubuntu/Linux section of the Docker docs. There are prerequisites to take care of if you are running something other than Ubuntu Trusty 14.04. The install primarily involves the following command (details in the link above):

$ wget -qO- https://get.docker.com/ | sh
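Once the install script finishes, an optional sanity check from the Ubuntu guest is:

$ sudo docker version    # should print matching client and server (daemon) versions
$ sudo docker info       # basic details about the daemon, storage driver, etc.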

3. Our goal is to run Docker commands from the Mac OS X terminal. The trick is to know that the Docker client does all of its interaction with the Docker daemon through a RESTful API. Any time you use docker build or docker run, the Docker client is simply issuing HTTP requests to the Docker Remote API on your behalf. So we need to open up a “communication channel” and make the daemon’s API on the Docker host in the virtual machine accessible to Mac OS X. Two steps are involved:

– Edit the /etc/default/docker file in Ubuntu Trusty to include the line shown below

DOCKER_OPTS="-H tcp://0.0.0.0:2375"

and then restart the Docker service:

$ sudo service docker restart
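To confirm the daemon is now listening on the TCP port, an optional check (assuming net-tools is present, as it is by default on Trusty) is:

$ sudo netstat -lntp | grep 2375    # should show the docker daemon bound to 0.0.0.0:2375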

4. The following command, run from the Ubuntu OS, won’t work now, since Docker is reachable only over a TCP connection (the default Unix socket is no longer enabled):

$ docker ps

This on the other hand should work:

$ docker -H tcp://127.0.0.1:2375 ps
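Since the client is just issuing HTTP requests (as noted in step 3), you can also hit the Docker Remote API directly with curl from the Ubuntu guest, for example:

$ curl http://127.0.0.1:2375/version           # same information as 'docker version'
$ curl http://127.0.0.1:2375/containers/json   # JSON equivalent of 'docker ps'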

5. Now we need to forward the IP address / port so that Docker commands can be issued from Mac OS X. This involves configuring the VMware Fusion NAT settings for the Ubuntu Trusty guest OS. We know we need to forward port 2375. In addition, we need to get the IP address of the Docker engine, or Docker host, running in Ubuntu. The way to get that is to run the command

$ ifconfig

This will dump the networking settings; look for docker0 and note down its IP address. In my case it was 172.17.42.1. This is important to understand: we are NOT looking for the IP address of the guest OS or its eth0 interface, but the Docker host IP address (the docker0 bridge).

Next run ‘ifconfig‘ on the Mac OS X terminal, and get the IP address, or rather the adapter name, assigned to the Ubuntu guest OS. I have two guest OSes installed: Windows 8.1 and Ubuntu Trusty. When I run the ‘ifconfig’ command I see a number of networking interfaces, including two labeled ‘vmnet1‘ and ‘vmnet8‘. The way to ensure that you pick the right interface is to look at the IP address assigned to the ‘vmnet‘ interface on Mac OS X and at the eth0 interface in Ubuntu Trusty, and pick the one that is on the same subnet. Here is what I had:

vmnet8 on Mac OS X IP Address: 192.168.112.1
eth0 on Ubuntu Trusty Guest OS running VMWare Fusion: 192.168.112.129

Now that we know which VMware Fusion instance we are dealing with, we need to modify nat.conf for the right instance. I will use vmnet8 as the example:

$ Mac OS X> sudo vi /Library/Preferences/VMware\ Fusion/vmnet8/nat.conf

In this file look for the [incomingtcp] section, and add the following:

2375 = 172.17.42.1:2375

The format of this section is <external port number> = <VM’s IP Address>:<VM’s Port Number>. Note that in the above example, the IP address used for the VM’s IP address is actually the address of the docker0 interface reported by ifconfig when run in Ubuntu.
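The NAT change may not take effect until VMware Fusion’s networking is restarted (or Fusion itself is relaunched). One commonly cited way to do that is shown below; the path and flags are an assumption and may vary across Fusion versions:

$ Mac OS X> sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop     # stop Fusion networking (assumed path)
$ Mac OS X> sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --start    # start it again so nat.conf is re-read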

6. We are close now. Make sure that you have brew on your Mac OS X (if you are developing on a Mac, you probably already have it!). Install the Docker client on Mac OS X using:

$ Mac OS X> brew update 
$ Mac OS X> brew install docker
$ Mac OS X> docker --version

The last command should tell you whether Docker was installed correctly, and its version number. Assuming you installed Docker on the Ubuntu guest OS and the Docker client on Mac OS X via brew within minutes of each other, you would have the latest version on both and they would be the same. The versions have to be identical for this to work.

7. Now you are set to run Docker commands from the Mac OS X terminal. Run the same command as above:

$ Mac OS X> docker -H tcp://localhost:2375 ps

and it should work. The -H flag instructs the Docker client to connect to the API at the specified endpoint (instead of the default UNIX socket).

To avoid typing the flag and endpoint before every command, add the following export (for example to your shell profile), and you can run all Docker commands as you would natively:

export DOCKER_HOST=tcp://localhost:2375
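With that export in place, the usual commands work unadorned from the Mac terminal; for example (hello-world is simply the standard test image on Docker Hub):

$ Mac OS X> docker ps                 # talks to the daemon running in the Ubuntu VM
$ Mac OS X> docker run hello-world    # pulls and runs the test image inside the VM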

Docker Away!

NOTE:

Opening/forwarding ports on the OS is NOT recommended in production environments. This is merely to ease development effort if you are doing it on your own MacBook.

Written by Ashu Joshi

July 17, 2015 at 1:25 pm

Cloud Gaming: OnLive [Emulating Netflix?]


Two things to consider: OnLive’s performance may not yet satisfy the extreme gaming segment, but just as HD video streaming over the Internet has become practical, extreme-graphics gaming could eventually take off. Maybe Netflix should consider buying them to complement their offering?

The MicroConsole is powered by the Marvell Armada 1000; if OnLive takes off, this could be a boost for the $MRVL stock as well. [Read the Engadget review, which talks about the Armada 1000, here: http://engt.co/hDElDq]

Amplify’d from www.businessweek.com
Using a palm-size MicroConsole adapter hooked up to your TV and home network, you play “in the cloud” over the Internet, with the games actually running on powerful servers that might be 1,000 miles away.
This has the potential to be disruptive, maybe even revolutionary, technology. The $99 adapter plugs into a high-definition port on your television, and comes with a wireless handheld controller and one game.
and OnLive is also introducing a Netflix (NFLX)-like all-you-can-play plan for $9.99 a month for a selection of its titles.

Read more at www.businessweek.com

Written by Ashu Joshi

January 25, 2011 at 10:31 am

Posted in Cloud Computing


Ericsson & Cloud Computing


Ericsson usually does not figure in the Cloud Computing conversations and blogs, at least the ones that I have been following (and I checked: they are not among the supporters of the Cloud Manifesto project). However, I think a case can be made that Ericsson is in an excellent position to offer Cloud Computing services, maybe as managed services, which they excel at. How did I arrive at this conclusion?

Being a part of Cisco’s Service Provider group, I regularly track news on Ericsson and Nokia Siemens Networks (NSN), and the first bit of information that lodged in my mind was an article in the Technology section of the NYTimes titled “Ericsson and Nokia Siemens Are Managing Just Fine“. To quote from the article:

Ericsson manages all or parts of the networks of 230 mobile operators with a total 225 million customers.

That is significant revenue for Ericsson: managing mobile networks earned them $1.7 billion last year. With this bit of information fresh in my mind, I ended up attending a presentation by Brad Anderson on Erlang at the Atlanta AWSome meetup last night.

Brad gave an introduction to Erlang and the history of its development at Ericsson. His point was that Erlang is extremely well-suited for building Cloud Computing platforms; from his slide titled “The Three Biggies”, here are the reasons why Erlang is ideal for Cloud Computing:

  • Massively Concurrent
  • Seamlessly Distributed
  • Fault Tolerant

The slides from an earlier presentation by Brad, which make a good case for “Why Erlang?”, can be found here. They also talk about how Erlang is catching on with major social networking sites such as LinkedIn and Facebook. Erlang, as I learnt, has been around since 1990, is used actively by Ericsson in their telephone infrastructure unit, and is augmented by the Open Telecom Platform (OTP). And guess what: Erlang is open source!

It is difficult to tell how much Ericsson equipment and how much Erlang-based technology is being used to manage these mobile networks, but it is certainly interesting to note that the combination of technology expertise in Erlang (a programming platform/language) and their massive experience in managing networks and operations would be very useful in providing Cloud Computing services.

The real question is: does Ericsson really think of itself as being in a position to provide Cloud Computing services?

Written by Ashu Joshi

April 15, 2009 at 10:41 am