TensorFlow Docker Image

TensorFlow Docker Installation & Setup

1. Get the TensorFlow Docker image
:/> docker pull tensorflow/tensorflow

2. Start the instance
:/> docker run -it -p 8888:8888 --rm tensorflow/tensorflow

3. Open the URL (with port and token) printed by the above command to access the Jupyter notebook

4. Log in to the TensorFlow container
NOTE: "tensor" below is the name of the container; otherwise use the container ID.
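The login step can be sketched with docker exec; "tensor" is an assumed container name (set it with --name when starting the container, or look up the actual ID with docker ps):

```shell
# Attach an interactive bash shell to the running container.
# "tensor" is a hypothetical container name; substitute the ID from `docker ps`.
docker exec -it tensor /bin/bash
```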

Removing Docker images and containers

1. List & Remove images
docker images -a
docker rmi image-name1 image-name2

2. List & Remove Dangling Images
docker images -f dangling=true
docker rmi $(docker images -f dangling=true -q)

3. Remove all images
docker rmi $(docker images -a -q)

4. List & Remove images by pattern
docker images | grep "pattern"
docker images | grep "pattern" | awk '{print $3}' | xargs docker rmi
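When removing images by pattern, the IDs have to be extracted from the `docker images` output. Here is that extraction step run against simulated output (image names and IDs are made up for illustration):

```shell
# Simulated `docker images` output; names and IDs are hypothetical.
sample='REPOSITORY    TAG      IMAGE ID       CREATED      SIZE
myapp/web     latest   1a2b3c4d5e6f   2 days ago   120MB
myapp/worker  latest   0f9e8d7c6b5a   2 days ago   98MB
nginx         latest   deadbeef0001   3 days ago   109MB'

# grep selects the matching rows; awk extracts the IMAGE ID column (column 3),
# which is what gets handed to `docker rmi` via xargs in the real pipeline.
ids=$(printf '%s\n' "$sample" | grep "myapp" | awk '{print $3}')
echo "$ids"
```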

5. List & Remove containers

docker ps -a
docker rm container1_id container2_id

docker container rm container-name

6. Remove container upon exit
docker run --rm image_name

7. List & Remove all exited containers
docker ps -a -f status=exited
docker rm $(docker ps -a -f status=exited -q)

8. Listing & Removing using multiple filters
docker ps -a -f status=exited -f status=created
docker rm $(docker ps -a -f status=exited -f status=created -q)

9. List and Remove all containers by pattern

docker ps -a | grep "pattern"
docker ps -a | grep "pattern" | awk '{print $1}' | xargs docker rm

10. Stop and remove all containers
docker ps -a

docker stop $(docker ps -a -q) 
docker rm $(docker ps -a -q)

docker stop container-name

11. List & Remove volumes
docker volume ls
docker volume rm volume_name1 volume_name2

12. List and remove all dangling volumes
docker volume ls -f dangling=true
docker volume rm $(docker volume ls -f dangling=true -q)

13. Remove Volume and its container
docker rm -v container_name
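On newer Docker versions, a quicker alternative for the cleanup recipes above is the prune family of commands:

```shell
# Remove all stopped containers.
docker container prune

# Remove dangling images (add -a to remove all unused images).
docker image prune

# Remove unused local volumes.
docker volume prune

# Remove stopped containers, dangling images, unused networks and,
# with --volumes, unused volumes as well.
docker system prune --volumes
```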

Docker Swarm and Creating Cluster on Docker

Initializing and Joining Swarm
sudo docker swarm init

NOTE: if the host (or guest VM) has multiple network interfaces, then you need to specify "--advertise-addr" with a specific IP

It worked fine after specifying the IP

From the second VM, run the following command to join the swarm as a worker node
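`docker swarm init` prints the exact join command (including the token) to copy onto the worker. Its general shape, with placeholder token and address, is:

```shell
# Placeholders: take the real token and manager IP from the `docker swarm init` output.
docker swarm join --token <worker-token> <manager-ip>:2377
```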

To list all swarm nodes connected to the manager
:/> docker node ls

NOTE: Only swarm managers can execute swarm management commands; workers just provide capacity.

For deploying applications (services)
:/> docker stack deploy -c docker-compose.yml getstartedlab

To change the number of replicas (nodes)
a. Simply modify the docker-compose file
b. Re-run the stack deploy command as shown
:/> docker stack deploy -c docker-compose.yml getstartedlab
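For step (a), the replica count lives under the service's deploy key in the compose file. A minimal sketch (service name and image are hypothetical):

```yaml
version: "3"
services:
  web:
    image: username/repo:tag   # hypothetical image
    deploy:
      replicas: 5              # change this number, then re-run stack deploy
    ports:
      - "80:80"
```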

To list the services which are running
:/> docker service ls

To list the nodes where the service is running
:/> docker service ps getstartedlab

To remove (uninstall) the service
:/> docker service rm getstartedlab

Leaving Swarm
sudo docker swarm leave

NOTE: you may need to use "--force" when leaving the swarm from the last manager.

Docker Volumes

Docker offers three different ways to mount data into a container from the Docker host: volumes, bind mounts, or tmpfs mounts. When in doubt, volumes are almost always the right choice.

·         Volumes are stored in a part of the host filesystem which is managed by Docker (/var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this part of the filesystem. Volumes are the best way to persist data in Docker.
·         Bind mounts may be stored anywhere on the host system. They may even be important system files or directories. Non-Docker processes on the Docker host or a Docker container can modify them at any time.
·         tmpfs mounts are stored in the host system’s memory only, and are never written to the host system’s filesystem.

There are three main use cases for Docker data volumes:
1.      To keep data around when a container is removed
2.      To share data between the host filesystem and the Docker container
3.      To share data with other Docker containers

Volume Commands

Create a volume:
$ docker volume create my-vol

List volumes:
$ docker volume ls

Inspect a volume:
$ docker volume inspect my-vol
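The inspect output is JSON; the Mountpoint field shows where the volume lives on the host. A typical result looks roughly like this (timestamps and labels will differ):

```json
[
    {
        "CreatedAt": "2018-01-01T00:00:00Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-vol/_data",
        "Name": "my-vol",
        "Options": {},
        "Scope": "local"
    }
]
```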

Remove a volume:
$ docker volume rm my-vol

Start the container using the volume
$ docker run -d -it --name=nginxtest -v nginx-vol:/usr/share/nginx/html nginx:latest

Clean up the container and its volume
$ docker container stop nginxtest
$ docker container rm nginxtest
$ docker volume rm nginx-vol

For mounting the volume as read-only in the container
$ docker run -d -it --name=nginxtest -v nginx-vol:/usr/share/nginx/html:ro nginx:latest

Bind Mounts (Sharing Data Between the Host and the Docker Container)

$ docker run -dit --name devtest -v "$(pwd)"/target:/app nginx:latest

$ docker run -d -v ~/nginxlogs:/var/log/nginx -p 5000:80 -i nginx:latest

-v ~/nginxlogs:/var/log/nginx — This sets up a bind mount that links the ~/nginxlogs directory on the host machine to the /var/log/nginx directory inside the Nginx container. Docker uses a : to split the host's path from the container path, and the host path always comes first.

Installing Docker CE on Ubuntu

Before you install Docker CE (first time) on a new host machine, you need to set up the Docker repository. Afterward, you can install and update Docker from the repository.

1. Update the apt package index:

$ sudo apt-get update

2. Install packages to allow apt to use a repository over HTTPS:

$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common

3. Add Docker’s official GPG key:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

4. Verify that you now have the key with the fingerprint 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88, by searching for the last 8 characters of the fingerprint.

$ sudo apt-key fingerprint 0EBFCD88

pub   4096R/0EBFCD88 2017-02-22
      Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid                  Docker Release (CE deb) <docker@docker.com>
sub   4096R/F273FCD8 2017-02-22

5. Use the following command to set up the stable repository. You always need the stable repository, even if you want to install builds from the edge or testing repositories as well. To add the edge or testing repository, add the word edge or testing (or both) after the word stable in the commands below.

Note: The lsb_release -cs sub-command below returns the name of your Ubuntu distribution, such as xenial. Sometimes, in a distribution like Linux Mint, you might have to change $(lsb_release -cs) to your parent Ubuntu distribution. For example, if you are using Linux Mint Rafaela, you could use trusty.

$ sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

NOTE: This adds the repository URL to "sources.list" under the /etc/apt directory.


1. Update the apt package index.

$ sudo apt-get update

2. Install the latest version of Docker CE, or go to the next step to install a specific version. Any existing installation of Docker is replaced.

$ sudo apt-get install docker-ce

or, to install a specific version, use the command below
$ sudo apt-get install docker-ce=<VERSION>
The Docker daemon starts automatically.
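To find valid <VERSION> strings for your repository, list the available builds:

```shell
# Lists the docker-ce versions available from the configured apt repositories;
# use the version string from the second column in the install command above.
apt-cache madison docker-ce
```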

3. Verify that Docker CE is installed correctly by running the hello-world image.

$ sudo docker run hello-world
This command downloads a test image and runs it in a container. When the container runs, it prints an informational message and exits.


Virtual Machines vs Docker Containers

What is docker?

A Docker container can be described as a wrapper around a piece of software that contains everything needed to run the software. This is done to make sure that the app will run the same no matter what environment it runs in.

Virtual Machines vs Docker Containers

VirtualBox and VMWare are virtualization apps that create virtual machines that are isolated at the hardware level.

Docker is a containerization app that isolates apps at software level.

Virtual Machines                                     | Docker Containers
Hardware-level process isolation                     | OS-level process isolation
Complete isolation of applications from the host OS  | Can share some resources with the host OS
Each VM has a separate OS                            | Each container can share OS resources
Boots in minutes                                     | Boots in seconds
More resource usage                                  | Less resource usage
Pre-configured VMs are hard to find and manage       | Pre-built containers for home server apps are readily available
Customizing pre-configured VMs requires work         | Building a custom setup with containers is easy
Typically bigger in size, as they contain a whole OS | Small in size, with only the Docker engine over the host OS
Can be easily moved to a new host OS                 | Destroyed and recreated rather than moved (data volumes are backed up)
Creating a VM takes a relatively long time           | Containers can be created in seconds
Virtualized apps are harder to find and take more time to install and run | Containerized apps such as SickBeard, Sonarr, CouchPotato etc. can be found and installed within minutes

Docker vs Linux LXC

Linux cgroups, originally developed by Google, govern the isolation and usage of system resources, such as CPU and memory, for a group of processes.

Linux namespaces, originally developed by IBM, wrap a set of system resources and present them to a process to make it look like they are dedicated to that process.

The original Linux container technology is Linux Containers, commonly known as LXC. LXC is a Linux operating system level virtualization method for running multiple isolated Linux systems on a single host. Namespaces and cgroups make LXC possible.

Single vs. multiprocess. Docker restricts containers to run as a single process. If your application environment consists of X concurrent processes, Docker wants you to run X containers, each with a distinct process. By contrast, LXC containers have a conventional init process and can run multiple processes.
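The single-process model is visible in a typical Dockerfile: the image declares one foreground command that becomes PID 1 in the container. A minimal sketch:

```dockerfile
# Hypothetical single-purpose image: one foreground process per container.
FROM nginx:latest
# nginx runs in the foreground as the container's only (PID 1) process;
# a second concurrent process would go into its own container.
CMD ["nginx", "-g", "daemon off;"]
```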

Stateless vs. stateful. Docker containers are designed to be stateless, more so than LXC. First, Docker does not support persistent storage. Docker gets around this by allowing you to mount host storage as a “Docker volume” from your containers. Because the volumes are mounted, they are not really part of the container environment.

Second, Docker containers consist of read-only layers. This means that, once the container image has been created, it does not change. During runtime, if the process in a container makes changes to its internal state, a “diff” is made between the internal state and the image from which the container was created. If you run the docker commit command, the diff becomes part of a new image—not the original image, but a new image, from which you can create new containers. Otherwise, if you delete the container, the diff disappears.
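The commit workflow described above looks like this in practice (container and image names are hypothetical); without the commit, removing the container discards the diff:

```shell
# Run a container and make changes inside it (the "diff" over the image layers).
docker run -it --name demo ubuntu bash

# Capture the container's diff as a brand-new image; the original image is untouched.
docker commit demo demo-snapshot:v1

# Removing the container without committing would have discarded the diff.
docker rm demo
```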

Docker Terminology - Basics

An image is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files.
A container is a runtime instance of an image—what the image becomes in memory when actually executed. It runs completely isolated from the host environment by default, only accessing host files and ports if configured to do so.
Containers run apps natively on the host machine’s kernel.

Docker daemon - The background service running on the host that manages building, running and distributing Docker containers.
Docker client - The command line tool that allows the user to interact with the Docker daemon.
Docker Store - A registry of Docker images, where you can find trusted and enterprise-ready containers, plugins, and Docker editions.

An important distinction with regard to images is between base images and child images.
·         Base images are images that have no parent images, usually images with an OS like ubuntu, alpine or debian.
·         Child images are images that build on base images and add additional functionality.
Another key concept is the idea of official images and user images. (Both of which can be base images or child images.)
·         Official images are Docker-sanctioned images. Docker, Inc. sponsors a dedicated team that is responsible for reviewing and publishing all Official Repositories content. This team works in collaboration with upstream software maintainers, security experts, and the broader Docker community. These are not prefixed by an organization or user name. For example, the python, node, alpine and nginx images are official (base) images. To find out more about them, check out the Official Images Documentation.
·         User images are images created and shared by users like you. They build on base images and add additional functionality. Typically these are formatted as user/image-name. The user value in the image name is your Docker Store user or organization name.
A registry is a collection of repositories, and a repository is a collection of images.
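The registry/repository/tag hierarchy shows up in a fully qualified image reference. A small sketch that pulls one apart using plain shell parameter expansion (all names are hypothetical):

```shell
# <registry-host>/<user-or-org>/<repository>:<tag> -- all names are hypothetical.
ref="registry.example.com/alice/webapp:1.0"

registry="${ref%%/*}"        # everything before the first "/"
remainder="${ref#*/}"        # alice/webapp:1.0
repo="${remainder%:*}"       # alice/webapp
tag="${ref##*:}"             # 1.0
echo "$registry $repo $tag"
```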


Angular 2 Proxy Configuration (CORS Headers Issue for local testing)

As part of integrating the WebSphere Commerce REST services into an Angular 2 application, I created a bunch of WCS mock REST services for unit testing using SOAP UI. This helps to test multiple scenarios (different data, failure scenarios, etc.).

You can use SOAPUI or other tools to mock the REST services during the development.

1. Create and start the mock services (or you may have backend services running on a different port)

While creating the mock service, I gave the following port and host address.


Start the mock services by clicking the "run" button; then you can access the mock services at http://localhost:4646

Problem: You will see an error similar to the one below when you try to access these mock services from the Angular 2 app, because of the cross-domain (same-origin) policy. We can mitigate this by adding Cross-Origin Resource Sharing (CORS) headers at the server level, if your backend (API server) supports it.


Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:4646/wcs/resources/store/14321/cart/shipping_modes. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).

Solution: As I am not running any backend API server locally where I could add CORS headers, I decided to use the Angular 2 proxy configuration to bypass these errors.

1. Create a proxy.config.json file under the project root directory

2. Add the following to the proxy configuration file
{
  "/localapi/*": {
    "target": "http://localhost:4646",
    "pathRewrite": {
      "^/localapi": ""
    },
    "changeOrigin": true,
    "secure": false,
    "logLevel": "debug"
  }
}

3. Then start the local Angular dev server as shown below
a. Using NPM START
Add "--proxy-config proxy.config.json" to the "start" script in package.json, e.g. "start": "ng serve --proxy-config proxy.config.json"

b. Or using NG SERVE
:/> ng serve --proxy-config proxy.config.json

4. Use the "/localapi" context while making HTTP calls from Angular 2
http.get("/localapi/wcs/resources/store/14321/cart/shipping_modes", options)

5. Now when you access the application, the browser makes the call on port 4200 and gets the response from the SOAP UI mock service that is running on 4646.