Smartphones and GPS-enabled devices of all kinds now provide highly accurate location for almost everything. As consumers with smartphones, we use Web-based mapping frequently, especially for navigation with apps such as Google Maps or Apple Maps.
Location is becoming a critical context for both consumer applications (e.g. geo-tagged photos) and enterprise applications (e.g. tracking your fleet & assets). Visualizing this information on a map is necessary, and by the same token understanding Web Mapping is critical.
I have been involved in building an asset tracking solution, and over the last four weeks or so I wanted to delve deeper into the technology behind Web Mapping.
Here are the resources I discovered and learned from:
- Primer: Study this excellent set of slides hosted by Maptime.IO. This is the best place to get started. Don't worry if you have a hard time understanding it the first time; come back to it after some time if the first pass is rough.
- The gold standard for web mapping, by far, is Google Maps. Google provides both free and paid developer API services, including APIs for calculating distances and getting directions, and its tiles are a tremendous source of information. Here is a link to get started with Web Mapping & the Google Maps APIs.
- Open source & crowdsourcing have been at work in the world of Web Mapping as well. OpenStreetMap (OSM) is a crowdsourced movement to map the world. The primary challenge with OSM is that for some places the data is still lacking, so when you zoom in the map can lack detail. However, it is open source and free, and digging deeper into OSM will make you an expert at Web Mapping.
- The primer mentions Google Maps and OSM, and also something known as Leaflet. Right after you have gone over the Primer material above, and if you know HTML/CSS/JS, you want to hop over to the Leaflet Tutorial site and start building maps. Leaflet was created by Vladimir Agafonkin, who works at Mapbox.
- Mapbox defines itself as "an open source mapping platform for custom designed maps." Check out their site; I would recommend starting with the Mapbox JS library.
I have built, as a tutorial and learning experience, a web map that leverages Leaflet and OSM to show an interactive map of the US. You can select different overlays on the map for US Population Density, State-by-State Renewable Energy goals and Earthquake data from USGS. You can check it out by clicking the image below or going to https://ashujoshi.github.io/leaflet-maps
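Under the hood, web maps like this one rest on a simple addressing scheme: the world is pre-rendered into square tiles indexed by zoom level and an x/y position, and libraries like Leaflet just fetch and stitch the right tiles. As a minimal sketch of that core idea, here is the standard "slippy map" conversion from latitude/longitude to tile coordinates (the formula documented on the OSM wiki), in JavaScript:

```javascript
// Convert a latitude/longitude pair to OSM "slippy map" tile coordinates.
// Tile URLs then look like https://tile.openstreetmap.org/{z}/{x}/{y}.png
function latLonToTile(lat, lon, zoom) {
  const n = Math.pow(2, zoom); // number of tiles per axis at this zoom level
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x, y, zoom };
}

// The equator/prime-meridian point at zoom 1 falls in tile (1, 1):
console.log(latLonToTile(0, 0, 1)); // → { x: 1, y: 1, zoom: 1 }
```

Leaflet hides this math behind `L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png')`, but knowing it helps when you are debugging why a particular zoomed-in area looks empty.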
Verizon is getting aggressive about growing its IoT business. Verizon's first foray into IoT was in the Smart Home, when it launched a service around the solution from 4Home (later acquired by Motorola – my guess is that the acquisition was influenced by Verizon). It was a DIY service, unlike its peers who had launched managed services (Comcast, AT&T, Cox, ADT), and IMO it was dead on arrival. It limped along for four years and finally shut down.
Verizon's strategy also seemed uncertain when it acquired Hughes Telematics back in June of 2012. Hughes Telematics is based in Atlanta, and I have heard only anecdotes and rumors of that division constantly losing people or laying them off since 2012. It felt as though their Connected Car strategy was falling apart.
However, recent events point to a different story – they are getting serious about this space. They have announced two back-to-back acquisitions: first Telogis in June of this year, followed by Fleetmatics in August. Between the three acquisitions, Verizon certainly has heft in the Connected Car & Telematics space.
And to keep the momentum rolling, Verizon announced last week that it is acquiring Sensity Systems, a Smart City startup.
The question, though, is whether it has the internal organizational strength and discipline to make the most of all of these acquisitions. Remember that they have also announced that they are acquiring Yahoo!
List of all the VZ Acquisitions as compiled by Crunchbase: https://www.crunchbase.com/organization/verizon/acquisitions
Here is a list of analyses of the acquisitions worth reading:
LoRaWAN™ defines the communication protocol and system architecture for the network while the LoRa® physical layer enables the long-range communication link. The protocol and network architecture have the most influence in determining the battery lifetime of a node, the network capacity, the quality of service, the security, and the variety of applications served by the network.
This post will give a quick overview of the ingredients for building a complete LoRaWAN-enabled application. The software development details of each ingredient are topics for another day.
4 Primary Ingredients
To get started building solutions using LoRa/LoRaWAN, this is what you are going to need:
- Thing: A LoRa radio module that you can hook up to a sensor. The challenge, especially in the US, is that it's not easy to get something like that. The easiest route, I think, is to get a dev kit such as the one from Multi-tech (combine the mDot with the UDK). It has an Arduino shield socket, making it easy to plug in sensors and play around. Of course, if you are serious about using LoRa to enable your IoT applications you will end up making sophisticated devices such as the ones from Decent Lab.
- Gateway or Concentrator: You will need a Gateway or Concentrator that speaks LoRa to the sensors & devices and has IP connectivity to the Cloud – for example Wi-Fi, Ethernet or 3G. The LoRa Alliance is making a concerted effort to get everybody to adopt LoRaWAN as the protocol. I say that because, if you wanted to, you could build your own, as Link Labs has decided to. My primary point is that while one side of the LoRa Gateway speaks "LoRa", the other side connects to the Internet (over TCP/IP or UDP/IP). The piece of software that interfaces with the backend is typically referred to as the Packet Forwarder, and it has been open sourced by Semtech. Once again the choices are limited, but while I used the Multi-tech Gateway, you could also get a gateway from Link Labs.
- LoRaWAN Network Server or Backend: This is the entity that speaks LoRaWAN to the Gateway and gets the data to your application. Semtech, the company behind LoRa, hosts a backend that you could leverage. Multi-tech runs a "network server" on the Gateway itself. Or you could connect to The Things Network (TTN for short, an open-source, crowd-funded backend). On that note, TTN is also going to be delivering cheap gateways. I first used the Network Server running on the Gateway, then made changes to use the TTN backend, and finally I hacked together a combination of several open source backends and ran it with my application backend.
- Application or Application Backend: The actual IoT application – the one that does something useful with the Thing – is most likely running on a private or public cloud; it interfaces with the LoRaWAN network server and does the application-specific processing. The interface between the LoRaWAN backend and the Application Server is defined by the LoRaWAN backend provider – it is not standardized. TTN uses MQTT for this, and so does Multi-tech. This is the easy part, because you could run it on your own PC or Mac, or on any of the public clouds (or maybe as a smartphone app).
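Since that interface is provider-specific, here is a hedged sketch of what the application side can look like with an MQTT-based backend. The message shape (a `dev_id` plus a base64-encoded `payload_raw` field, loosely modeled on TTN's JSON uplinks) and the 4-byte sensor encoding are assumptions for illustration – adjust both to what your network server actually publishes and what your device firmware actually packs:

```javascript
// Sketch: handling a LoRaWAN uplink delivered by the network server over MQTT.
// Decode a hypothetical 4-byte payload: int16 temperature in centi-degrees,
// then uint16 relative humidity in hundredths of a percent.
function decodeUplink(message) {
  const bytes = Buffer.from(message.payload_raw, "base64");
  return {
    device: message.dev_id,
    temperatureC: bytes.readInt16BE(0) / 100,
    humidityPct: bytes.readUInt16BE(2) / 100,
  };
}

// Example uplink, shaped like what an MQTT subscriber would receive.
// In a real app you would wire this to an MQTT client, e.g.:
//   client.on("message", (topic, buf) => decodeUplink(JSON.parse(buf)));
const uplink = {
  dev_id: "my-mdot-01",
  payload_raw: Buffer.from([0x09, 0x29, 0x13, 0x88]).toString("base64"),
};
console.log(decodeUplink(uplink));
// → { device: 'my-mdot-01', temperatureC: 23.45, humidityPct: 50 }
```

The point is that the LoRaWAN backend hands you JSON over MQTT, and your application's first job is simply to unpack the device's compact binary payload.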
Note that if you end up using a Network Service Provider such as Senet (for the prototype or for production), then you can skip the Gateway. This is because companies like Senet want to offer LoRa as a network service, akin to how AT&T or Verizon provide cellular (both voice and data) service, and they set up "gateways" to cover a certain area with the LoRaWAN network. You could use TTN, but either you will have to be within the coverage of a community-hosted TTN Gateway or buy your own and hook it up to the TTN backend.
Getting all four ingredients in place will enable you to truly understand the details of LoRaWAN and guide you on how to build solutions that address use cases well matched to the attributes of LoRaWAN.
This is what the device kit looks like – it has a BME280 (pressure, temperature, humidity), a light-level sensor and a GPS sensor attached. Also attached is an NFC tag reader that I use for provisioning.
Internet of Things, or IoT, has been a megatrend worldwide for the last decade or so. I have seen a frequently quoted number from Cisco indicating that the number of connected devices will reach 50B by 2020. But what is the math behind that number? How did Cisco arrive at it? And how many analysts, companies, presentations, and blogs have used it?
I read through the Cisco IBSG report (published in 2011) again, and here is how the number was computed:
- Cisco started with a report from Forrester stating there were 500 million connected devices (in the Cisco IBSG PDF report, the reference is actually to a number quoted by George Colony of Forrester Research in a post published by InfoWorld in March 2003).
- Next, take the number of connected devices in 2003 and compute the number of devices per person: 500 Million divided by 6.3 Billion, resulting in 0.079 devices per person.
- Using data from the US Census Bureau and their internal IBSG data for 2010, the next step was to put the number of connected devices at 12.5 Billion; dividing that by the world population (6.8 Billion) results in 1.84 devices per person. Note that per Cisco, the 12.5 Billion number includes Smartphones and Tablet PCs.
- Cisco then used the work done by a team of researchers in China that showed the size of the Internet doubles every 5.32 years. The reference in the report is to the following: Internet Growth Follows Moore’s Law Too
- The next step was easy – keep doubling the 2010 number roughly every five years. You don't even need a calculator: 12.5 Billion in 2010 doubles to 25 Billion in 2015, which doubles to 50 Billion in 2020.
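The steps above fit in a few lines of JavaScript if you want to check the arithmetic yourself:

```javascript
// Reproducing Cisco's 50B arithmetic step by step.
const devices2003 = 0.5e9;            // Forrester: 500M connected devices (2003)
const population2003 = 6.3e9;
const perPerson2003 = devices2003 / population2003;
console.log(perPerson2003.toFixed(3)); // → "0.079" devices per person

const devices2010 = 12.5e9;           // Cisco IBSG estimate (incl. phones/tablets)
const population2010 = 6.8e9;
const perPerson2010 = devices2010 / population2010;
console.log(perPerson2010.toFixed(2)); // → "1.84" devices per person

// "Internet doubles every ~5.32 years", rounded to doubling every 5 years:
console.log(devices2010 * 2);          // 2015 → 25000000000
console.log(devices2010 * 4);          // 2020 → 50000000000
```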
The 50B number does not get into a breakdown by device category. We cannot tell from it the percentage of sophisticated devices such as Smartphones versus the percentage of low-power, low-cost devices.
This is probably (and anecdotally) one of the most cited numbers used to show the growth of IoT. The math behind it is simple and abstract, but it helped propel the market forward!
Needless to say there is no dearth of IoT Platforms offering you the opportunity to get your “things” and “devices” connected, and reap the benefits of IoT.
It is interesting to note that Amazon continues to dominate this segment of Cloud Computing as well. I ran a rudimentary script to look up where the developer sites for different IoT platforms are hosted, and the results were pretty interesting – 8 out of 10 are hosted on AWS (disclaimer: it is not clear to me whether the entire platform is on AWS or only the developer front end). This is actually 8 out of 9 since I originally wrote the script, because the Thingworx and Axeda platforms have merged (all three URLs – the two old ones and the new ThingWorx.com – resolve to the same IP, 220.127.116.11).
And the surprise was Nest – an Alphabet/Google company that still (more than two years after being acquired) has its developer site running on AWS!
Take a look at the screenshot of the output below, and if you want to run the script yourself, and try other sites – copy the script at Gist.
It also brings up an interesting challenge for these companies now that Amazon has AWS IoT – Cloud Services for Connected Devices. AWS IoT may not offer the level of completeness that others such as Ayla or Exosite do, but the AWS IoT feature set is comprehensive enough to reduce the differentiation between them. The other choice is to go with Google, Microsoft or IBM – and all three have also added IoT enhancements and features to their cloud offerings.
The choice of not going with a cloud PaaS is equally difficult, because building the infrastructure themselves will be costly for IoT platforms, or they will lack scalability.
I feel this will accelerate consolidation in the IoT platform space (like Microsoft's acquisition of Solair), or leave companies unable to offer the scale that IoT needs.
I was hesitant, for good reasons, about pulling the trigger on an iPad Pro. I managed to locate a Space Grey 128GB version yesterday at a Best Buy near my home (one day after launch, it was the only one I could physically get my hands on across all the stores in the City of Atlanta and its suburbs).
My primary reason for hesitation was that I might not put it to use, and it would be a waste of money. Case in point: I had bought the 1st and 3rd generation iPads, and both were used only for watching movies on flights – no real work. I had also bought a Logitech keyboard case for the 3rd generation iPad (the model before they switched to Lightning connectors) hoping I would do real work on it. I did use the iPad, but I could also have done without it. Here were the reasons:
- Too heavy to hold in the hands and read, especially in contrast to the iPad Mini or the Nexus 7 tablets.
- Not enough capability to do real work beyond, mostly, email.
My hesitation before buying the iPad Pro was also related to the Pro being big, maybe bulky – and I knew that, reading-wise, it could be a challenge. Nonetheless, I wanted a "device" that had a large screen and could function for work and entertainment – mostly after 9 to 5. For work I use a 15″ Retina MacBook Pro connected to a 27″ Apple Thunderbolt Display, and every day I would undock it and carry it around for meetings or after work.
I had mixed feelings about whether the iPad Pro would fill the void described above, so when I bought it yesterday I mentally prepared myself that this could very well be a waste of money, just like the earlier iPads more or less were.
Having spent less than 24 hours with the iPad Pro, I am pleasantly surprised that I am actually loving it. Here are the reasons:
- I read on the iPad Pro last night in bed – while I could not hold it in my hands, I propped it up against my folded legs. It was pretty good for reading, browsing and email, simply because it has a large screen.
- The display is simply awesome, and I caught up on videos that I had transferred over to the Pro.
- Email, writing this blog post, and the Microsoft Office experience were much better than on previous iPads – not as good as a laptop, but for the objective I have in mind it does the trick.
- Reading Comics on iComics App or even on the Kindle App (I had bought Calvin & Hobbes) was fantastic on the large screen.
- I don't sketch, but I like to doodle or scribble when taking notes or thinking – and this was better for it than any digital experience before. I scribbled and played around with the Jot Script Pro stylus in Penultimate and with the "Pencil" in Paper by 53.
I am going to wait on getting a Smart Keyboard. The Logitech Create is supposed to be better than the Apple one, but I read that it is not easy to remove the iPad Pro from the Logitech Create – and being able to easily go with or without the keyboard is going to be key to my experience with the iPad Pro.
For now I am using a small Apple Bluetooth keyboard, and the "Compass" by Twelve South to prop up the iPad Pro.
I have been exploring Docker for projects leveraging Node.js. My experimentation was all on my Retina MacBook Pro running Yosemite. Docker belongs to a relatively new breed of virtualization technologies known as containers. A commonly used analogy for Docker is to compare it to real-life shipping containers or Lego bricks: it provides a fundamental unit, and with it a way for an application to be portable and movable, regardless of hardware.
Here is a quick snapshot from the “What is Docker?” page:
“Docker” encompasses the following:
- Docker client: this is what's running on our machine. It's the docker binary that we'll be interfacing with whenever we open a terminal and type $ docker pull or $ docker run. It connects to the docker daemon, which does all the heavy lifting, either on the same host (in the case of Linux) or remotely (in our case, interacting with our VirtualBox VM).
- Docker daemon: this is what does the heavy lifting of building, running, and distributing your Docker containers. (Refer to the “Docker Engine” in the diagram above)
- Docker Images: docker images are the blueprints for our applications. Keeping with the container/lego brick analogy, they’re our blueprints for actually building a real instance of them. An image can be an OS like Ubuntu, but it can also be an Ubuntu with your web application and all its necessary packages installed.
- Docker Container: containers are created from docker images, and they are the real instances of our containers/lego bricks. They can be started, run, stopped, deleted, and moved.
- Docker Hub (Registry): a Docker Registry is a hosted registry server that can hold Docker Images. Docker (the company) offers a public Docker Registry called the Docker Hub which we’ll use in this tutorial, but they offer the whole system open-source for people to run on their own servers and store images privately.
Docker is a Linux technology, so it doesn't run natively on Mac OS X; we need a Linux VM to run the containers. There are two ways this can be done on Mac OS X:
- Use Docker's Mac OS X approach. This involves using Boot2Docker, which has a built-in Linux machine. This is straightforward and well described: the boot2docker tool makes it about as easy as it can be by provisioning a Tiny Core Linux virtual machine running the Docker daemon, and installing a Mac version of the Docker client which communicates with that daemon.
- Use a VM such as VirtualBox or VMWare Fusion and run Linux as the Guest OS in the VM. I will be covering this approach – there are several blog posts covering it with VirtualBox. The challenge with this approach, unlike the boot2docker approach, is that all Docker commands need to be issued by logging into the Linux VM. This post gets into how to get a native-ish experience on your Mac while running a Docker host that doesn't use boot2docker – specifically, a Docker host running on an Ubuntu Trusty Guest OS under VMWare Fusion.
1. I have an Ubuntu 64-bit (recommended for Docker) 14.04.02 Server (command line only) installed as one of the Guest OSes under VMWare Fusion. I am using VMWare Fusion 7.1.2.
2. Install Docker following instructions from the Ubuntu/Linux section. There are prerequisites that need to be taken care of if you are running something other than Ubuntu Trusty 14.04. The install primarily involves the following command (details in the link above):
$ wget -qO- https://get.docker.com/ | sh
3. Our goal is to run Docker commands from the Mac OS X terminal. The trick is to know that the Docker client does all of its interaction with the Docker daemon through a RESTful API. Any time you use docker build or docker run, the Docker client is simply issuing HTTP requests to the Docker Remote API on your behalf. So we need to open up a "communication channel" and make the API of the Docker host in the virtual machine accessible to Mac OS X. Two steps are involved:
– Edit the /etc/default/docker file in Ubuntu Trusty to the line shown below, which makes the daemon listen on TCP port 2375 instead of the default UNIX socket,
DOCKER_OPTS="-H tcp://0.0.0.0:2375"
and then restart the Docker service:
$ sudo service docker restart
4. The following command from the Ubuntu OS won't work now, since Docker is accessed through a TCP connection:
$ docker ps
This on the other hand should work:
$ docker -H tcp://127.0.0.1:2375 ps
5. Now we need to forward the IP address / port so that the Docker commands can be issued from Mac OS X. This involves configuring the VMWare Fusion NAT settings for the Ubuntu Trusty Guest OS. We know we need to forward port 2375. In addition we need to get the IP address of the Docker engine or Docker host running in Ubuntu. The way to get that is to run the following command in Ubuntu:
$ ifconfig
This will dump the networking settings – look for docker0 and note down the IP address. In my case it was 172.17.42.1. This is important to understand: we are NOT looking for the IP address of the Guest OS or its eth0 interface, but the Docker host IP address.
Next, run 'ifconfig' on the Mac OS X terminal and get the IP address, or rather the adapter name, assigned to the Ubuntu Guest OS. I have two Guest OSes installed – Windows 8.1 and Ubuntu Trusty. When I run the 'ifconfig' command I see a number of networking interfaces, including two labeled 'vmnet1' and 'vmnet8'. The way to ensure that you pick the right interface is to look at the IP address assigned to the 'vmnet' in Mac OS X and at the eth0 interface in Ubuntu Trusty, and pick the one that is on the same subnet. Here is what I had:
vmnet8 on Mac OS X: 192.168.112.1
eth0 on Ubuntu Trusty Guest OS under VMWare Fusion: 192.168.112.129
Now that we know which vmnet instance we are dealing with in VMWare Fusion, we need to modify nat.conf for that instance. I will use vmnet8 as the example:
$ Mac OS X> sudo vi /Library/Preferences/VMware\ Fusion/vmnet8/nat.conf
In this file look for the [incomingtcp] section, and add the following:
2375 = 172.17.42.1:2375
The format of this section is <external port number> = <VM’s IP Address>:<VM’s Port Number>. Note in the above example – the IP Address used for the VM’s IP Address is actually the address of the docker0 interface provided in the ifconfig command when run in Ubuntu.
6. We are close now. Make sure that you have brew on your Mac OS X (if you are developing on a Mac, you probably already have it!). Install the Docker client on Mac OS X using:
$ Mac OS X> brew update
$ Mac OS X> brew install docker
$ Mac OS X> docker --version
The last command tells you whether Docker was installed correctly, and the version number. Assuming you installed Docker on the Ubuntu Guest OS and the Docker client on Mac OS X via brew only minutes apart, you should have the latest version on both and they should be the same. The versions have to be identical for this to work.
7. Now you are set to run Docker commands from the Mac OS X terminal. Run the same command as above:
$ Mac OS X> docker -H tcp://localhost:2375 ps
and it should work. The -H flag instructs the Docker client to connect to the API at the specified endpoint (instead of the default UNIX socket).
To avoid typing the flag and the endpoint before every command, add the following export (e.g. to your ~/.bash_profile), and you can then run all Docker commands as you would natively:
$ Mac OS X> export DOCKER_HOST=tcp://localhost:2375
Opening/forwarding ports like this is NOT recommended in production environments. This is merely to ease development if you are doing it on your own MacBook.