
Docker Compose

Docker Compose is a powerful tool for managing multi-container applications. It simplifies the process of defining, running, and scaling your entire application stack by using a single YAML configuration file. By defining your application's services, networks, and volumes in a Compose file, you can easily create and manage your entire environment with a single command. This streamlines development, testing, and deployment, making it easier to collaborate and iterate on your projects.

Docker Compose File

A Compose file, typically named compose.yaml or docker-compose.yaml, is placed in your working directory. While compose.yaml is the preferred name, Compose also supports docker-compose.yaml for backward compatibility. If both files exist, Compose gives priority to compose.yaml.

To start all the services defined in your compose.yaml file:

# docker compose up
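
If you want the services to keep running in the background, you can add the -d (detached) flag, for example:

# docker compose up -d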

To stop and remove the running services.

# docker compose down

If you want to monitor the output of your running containers and debug issues, you can view the logs with:

# docker compose logs
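
To follow the log output live, or limit it to a single service, you can add the -f flag and a service name (the service name here is just an example):

# docker compose logs -f frontend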

To list all the services along with their current status:

# docker compose ps

Other useful command-line options:

--dry-run: execute the command in dry run mode
--env-file: specify an alternate environment file
-f, --file: specify one or more Compose files
-p, --project-name: specify a project name

Sample docker compose file

services:
  frontend:
    image: example/webapp
    ports:
      - "443:8043"
    networks:
      - front-tier
      - back-tier
    configs:
      - httpd-config
    secrets:
      - server-certificate

  backend:
    image: example/database
    volumes:
      - db-data:/etc/data
    networks:
      - back-tier

volumes:
  db-data:
    driver: flocker
    driver_opts:
      size: "10GiB"

configs:
  httpd-config:
    external: true

secrets:
  server-certificate:
    external: true

networks:
  # The presence of these objects is sufficient to define them
  front-tier: {}
  back-tier: {}

Usage

Using multiple Compose files

# docker compose -f docker-compose.yml -f docker-compose.admin.yml run backup_db

Other options

# docker compose --env-file sample.env -f sample-docker-compose.yml -p docker-compose up -d
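
As an illustration, the sample.env file referenced above would simply contain KEY=value pairs that Compose substitutes into the Compose file; the variable names below are hypothetical.

TAG=1.0
DB_PASSWORD=changeme

In the Compose file you could then reference them as image: example/webapp:${TAG}.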

Docker

Docker is a platform that allows you to build, ship, and run applications in containers. It's essentially a tool that packages your application and its dependencies into a standardized unit called a container. This container can then be run consistently across different environments, such as development, testing, and production.

See the Docker CLI reference for the full list of Docker commands.

Let's look at the most frequently used Docker commands.

Image Management

Pulls 'hello-world' from a Docker registry (e.g., Docker Hub).

# docker pull hello-world

Builds an image from a Dockerfile located in the given build context (usually a directory).

# docker build path_to_build_context
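
In practice you will usually tag the image as you build it from the current directory; the image name and tag here are just examples:

# docker build -t myapp:1.0 .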

Pushes local_image to a Docker registry.

# docker push local_image

Lists all images on your system.

# docker images

Container Management

Creates and starts a container from an image.

# docker run <image name>

Starts a stopped container.

# docker start <container_id>

Restarts a container.

# docker restart <container_id>

Removes a container.

# docker rm <container_id>

Lists all running containers.

# docker ps

Lists all containers, including stopped ones.

# docker ps -a

Runs a command inside a running container.

# docker exec <container_id> <command>
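
For example, to open an interactive shell inside a running container (assuming the image provides bash):

# docker exec -it <container_id> bash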

Network Management

Creates a new network.

# docker network create <network_name>
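
Containers can then be attached to this network when they are started; the nginx image here is only an example:

# docker run -d --network <network_name> nginx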

Lists all networks.

# docker network ls

Removes a network.

# docker network rm <network_name>

Volume Management

Creates a new Volume.

# docker volume create <volume_name>
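
The volume can then be mounted into a container at run time; the mount path and image are just examples:

# docker run -it -v <volume_name>:/data ubuntu bash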

Lists all volumes.

# docker volume ls

Removes a Volume.

# docker volume rm <volume_name>

Other useful commands

Displays detailed information about a container.

# docker inspect <container_id>

Creates a new image from a container.

# docker commit <container_id> <image_name>

Removes stopped containers, dangling images, unused networks, and the build cache.

# docker system prune

Help command

# docker --help

Docker run command line options

Run with a custom name

# docker run -it --name test busybox sh

Run an interactive shell in a new container:

# docker run -it ubuntu bash

Run a detached container and publish port 80:

# docker run -d -p 80:80 nginx

Mount a host directory into the container:

# docker run -v /path/to/host/dir:/path/to/container/dir nginx

Change the entrypoint, for example to sleep for 60 seconds:

# docker run --entrypoint sleep ubuntu 60

Run with multiple options

# docker run -it --rm --name my-container --network my-network --dns 8.8.8.8 --cpus 2 --memory 1g --device /dev/sda1 -e MY_VAR=value ubuntu bash

Explanation

Interactive and TTY: -it
Remove container on exit: --rm
Custom name: --name my-container
Custom network: --network my-network
Custom DNS server: --dns 8.8.8.8
CPU limit: --cpus 2
Memory limit: --memory 1g
Device access: --device /dev/sda1
Environment variable: -e MY_VAR=value
Image: ubuntu
Command: bash

ngrep command

March 20, 2018

Network grep, or 'ngrep', is a tool which provides most of GNU grep's common features, applying them to the network layer. As per the Linux man page, ngrep is a pcap-aware tool that will allow you to specify extended regular expressions to match against data payloads of packets. It currently recognizes TCP, UDP and ICMP across Ethernet, PPP, SLIP, FDDI and null interfaces, and understands bpf filter logic in the same fashion as more common packet sniffing tools, such as tcpdump and snoop.

Basic Usage

$ ngrep 'Linux' -q

The above command will filter the packets which contain the word 'Linux'. The -q option, as per the man page, means 'Be quiet; don't output any information other than packet headers and their payloads (if relevant)'. It is good to include -q every time.

We can add more search options, much like we would with the grep command. A few examples are below.

$ ngrep -i 'Linux' -q // case-insensitive
$ ngrep -iv 'Linux' -q // case-insensitive and inverse match
$ ngrep -wi 'Linux' -q // case-insensitive exact word 'linux'
$ ngrep -W byline -q

The 'byline' option, like 'q', prints the output in a format that is easy to read. Other available options are 'normal', 'single' and 'none'. In my opinion, the most useful one is byline.
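
You can also restrict ngrep to a particular network interface with the -d option (the interface name here is just an example):

$ ngrep -d eth0 -q 'Linux'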

Options

$ ngrep -W normal -q
$ ngrep -W single -q
$ ngrep -W none -q

Commonly used 'bpf' filter options

$ ngrep -q 'req' 'host 192.168' // matches all packets containing the string 'req' sent to or from an IP address starting with 192.168

$ ngrep -q 'req' 'dst host 192.168' // same as above, but matches only the destination host

$ ngrep -q 'req' 'src host 192.168' // same as above, but matches only the source host

Protocol matching

$ ngrep -q 'req' 'tcp'
$ ngrep -q 'req' 'udp'
$ ngrep -q 'req' 'icmp'

Examples

[root@localhost ~]# ngrep 'testProject' 'tcp' -W byline -q 
interface: eth0 (10.10.20.0/255.255.255.0)
filter: (ip or ip6) and ( tcp )
match: testProject
T 112.110.93.73:25749 -> 10.10.20.62:80 [AP]
POST /testProject/test/1.0.0 HTTP/1.1.
Cookie: JSESSIONID=CB3ECA25FFD315BFD3774EB4D6B461FA.e; Path=/testProject/; HttpOnly.
LOGINTYPE: loginType=POST.
Content-Length: 256.
Content-Type: application/x-www-form-urlencoded; charset=UTF-8.
Host: test.praveen-vp.com.
Connection: Keep-Alive.
.
{"request":{"value": "test","response_format":"json"}}
        
[root@localhost ~]# ngrep 'testProject' -W byline -q -t 
interface: eth0 (10.10.20.0/255.255.255.0)
match: testProject

T 2018/03/20 11:25:59.214139 42.106.242.219:34954 -> 10.10.20.62:80 [AP]
POST /testProject/test/1.0.0 HTTP/1.1.
Cookie: JSESSIONID=1E0A606786521013AFEFCA51817A2116.b; Path=/testProject/; HttpOnly.
Content-Length: 256.
Content-Type: application/x-www-form-urlencoded; charset=UTF-8.
Host: test.praveen-vp.com.
Connection: Keep-Alive.
.
{"request":{"value": "test","response_format":"json"}}

ngrep with port

[root@localhost ~]# ngrep 'testProject' 'tcp' port 9030 -W byline -q -t 
interface: eth0 (10.10.20.0/255.255.255.0)
filter: (ip or ip6) and ( tcp port 9030 )
match: testProject

T 2018/03/20 11:47:01.927515 10.10.20.62:36746 -> 10.10.56.233:9030 [AP]
POST /testProject/login/1.0.0 HTTP/1.0.
X-Real-IP: 192.168.10.195.
X-Forwarded-For:192.168.10.195.
Host:test.praveen-vp.com.
Connection: close.
Content-Length: 198.
Cookie: JSESSIONID=B9D6B5DC553C128FCFE690B538A723C4.a; Path=/testProject/; Secure; HttpOnly.
LOGINTYPE: loginType=PRE.
Content-Type: application/x-www-form-urlencoded; charset=UTF-8.
.
{"request":{"value": "test","response_format":"json"}}

ngrep the database port

[root@localhost]# ngrep 'REGISTRATION' port 1521 -q -W byline -T
interface: eth1 (10.10.56.128/255.255.255.128)
filter: (ip or ip6) and ( port 1521 )
match: REGISTRATION

T +58.501281 10.10.56.233:39990 -> 10.10.56.231:1521 [AP]
...........i.......^...)................
.......................select CREATED_AT, LAST_LOGGED_AT FROM
 REGISTRATION_TABLE WHERE APP_ID = :1 AND STATUS = 'ACT' ORDER BY CREATED_AT DESC 
.................... .............. .......... 3ceb10b1d80acc72c0f62681e0045859

Linux Networking Commands

March 10, 2018

ping

The ping command sends echo requests to the host you specify on the command line, and lists the responses received along with their round-trip time. ping will send echo requests indefinitely until you stop it with Ctrl+C (SIGINT). You can also add the -c option to send a fixed number of requests.
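
For example, to send exactly four echo requests (the host name is just an example):

$ ping -c 4 google.com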


telnet

The telnet command is used for interactive communication with another host using the TELNET protocol. It begins in command mode, where it prints a telnet prompt ("telnet> "). If telnet is invoked with a host argument, it implicitly performs an open command for that host.


netstat

Displays the contents of the /proc/net files. It works with the Linux network subsystem and tells you the status of ports (open, closed, waiting), masqueraded connections, and a few other details.


tcpdump

tcpdump captures packets off a network interface and interprets them for you. It understands all basic internet protocols, and can be used to save entire packets for later inspection.

Usage examples:

# tcpdump port 22
# tcpdump -vv dst 192.168.65.133 and tcp


hostname

Tells the user the host name of the computer they are logged into.

$ hostname


traceroute

traceroute will show the route of a packet. It attempts to list the series of hosts through which your packets travel on their way to a given destination.

# traceroute google.com
# traceroute 192.168.22.133


nmap

It is a powerful network exploration tool and security scanner. nmap is a very advanced network tool used to query machines (local or remote) as to whether they are up and what ports are open on them.

# nmap -v -A scanme.nmap.org


iftop

iftop – display bandwidth usage on an interface by host

ifconfig

ifconfig is used to configure the kernel-resident network interfaces. It is used at boot time to set up interfaces as necessary. After that, it is usually only needed when debugging or when system tuning is needed.

If no arguments are given, ifconfig displays the status of the currently active interfaces. If a single interface argument is given, it displays the status of the given interface only; if a single ‘-a’ argument is given, it displays the status of all interfaces, even those that are down. Otherwise, it configures an interface.


iwconfig

iwconfig is similar to ifconfig, but is dedicated to wireless interfaces.


ifup/ifdown/ifquery

ifup – bring a network interface up.
ifdown – take a network interface down
ifquery – parse interface configuration

usage examples

$ ifdown eth0
$ ifup eth0
$ ifquery eth0

host

Performs a simple lookup of an internet address (using the Domain Name System, DNS).
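
A simple example (the domain name is only an illustration):

$ host google.com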


dig

The “domain information groper” tool, more advanced than host. If you give a hostname as an argument, it outputs information about that host, including its IP address, hostname and various other details.

To find the host name for a given IP address (i.e. a reverse lookup), use dig with the '-x' option. dig takes a huge number of options (to the point of being too many); refer to the manual page for more information.
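
For example, a forward and a reverse lookup (the name and address are only examples):

$ dig google.com
$ dig -x 8.8.8.8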


whois

whois is used to look up contact information in the “whois” databases; note that the servers are only likely to hold entries for major sites.

wget

(GNU Web get) is used to download files from the World Wide Web. To archive a single web site, use the -m or --mirror option. Use the -nc (no clobber) option to stop wget from overwriting a file if you already have it. Use the -c or --continue option to continue a file that was left unfinished by wget or another program.

Simple usage example: wget url_for_file
This would simply get a file from a site.
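
A few example invocations using the options mentioned above (url_for_file and url_for_site are placeholders):

$ wget -c url_for_file
$ wget -nc url_for_file
$ wget -m url_for_site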

wget has many more options refer to the examples section of the manual page, this tool is very well documented.


curl

curl is another remote downloader. It is designed to work without user interaction, supports a variety of protocols, can upload and download, and has a large number of tricks/work-arounds for various things. It can access dictionary servers (dict), ldap servers, ftp, http, gopher; see the manual page for full details.

To access the full manual (which is huge) for this command type:

curl -M

For general usage you can use it like wget. You can also log in using a user name with the -u option, typing your username and password like this:

curl -u username:password http://www.placetodownload/file

Remote Login

ssh

ssh (SSH client) is a program for logging into a remote machine and for executing commands on a remote machine. It is intended to provide secure encrypted communications between two untrusted hosts over an insecure network. X11 connections, arbitrary TCP ports and UNIX-domain sockets can also be forwarded over the secure channel.
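
Typical usage, logging in and running a single remote command (the user and host are placeholders):

$ ssh user@remote_host
$ ssh user@remote_host uptime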


scp

scp copies files between hosts on a network. It uses ssh(1) for data transfer, and uses the same authentication and provides the same security as ssh(1). scp will ask for passwords or passphrases if they are needed for authentication.
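
For example, copying a local file to a remote host (the user, host and paths are placeholders):

$ scp file.txt user@remote_host:/tmp/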

sftp

sftp is an interactive file transfer program, similar to ftp(1), which performs all operations over an encrypted ssh(1) transport. It may also use many features of ssh, such as public key authentication and compression. sftp connects and logs into the specified host, then enters an interactive command mode.
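
A minimal example, connecting and then using interactive commands such as get and put (the user and host are placeholders):

$ sftp user@remote_host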

Grep Command

March 3, 2018

Grep is a powerful tool which allows you to search for a word from the command line. In this post we will see some grep command usage examples. The grep command is basically for searching for a string in a given file.

By default, grep prints the matching lines.

Syntax

$ grep [options] pattern [file ...]

Add the '--color=auto' option to get colored output.

Grep command options.

Case-insensitive searching

$ grep -i 'string_to_search' file_name

Searching with wildcard characters

$ grep -i 'search key.*' file_name

the above command will list every line containing 'search key'; whatever comes after it does not matter.

More combinations like

$ grep 'search.*key' file_name
$ grep '.*search key' file_name

are possible.

Searching for an exact word, not as a substring

$ grep -w 'search key' file_name

Searching recursively in a directory.

$ grep -r 'search key' directory
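
For example, to search recursively and also show line numbers, you can combine options (the directory is just an example):

$ grep -rn 'search key' /var/log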

Invert match finding

$ grep -v 'search key' file_name

Multiple match finding

$ grep -e 'search key1' -e 'search key2' file_name

Show only the list of files that contain the search key

$ grep -l 'search key' file_name

Finding the number of matching lines in a file for 'search key'

$ grep -c 'search key' file_name

add the '-w' option to count only exact word matches.

$ grep -wc 'search key' file_name

Get the line numbers with the grep output

$ grep -n 'search key' file_name

Show lines after the match

$ grep -A <N> 'search key' file_name

the above command will show ‘N’ lines after the match.

Show lines before the match

$ grep -B <N> 'search key' file_name

the above command will show ‘N’ lines before the match.

Show lines around the match

$ grep -C <N> 'search key' file_name

the above command will show ‘N’ lines around the match.

Basic Linux Commands

Feb 28, 2018

ls

List the files in the directory. You can add the '-a' option to show all files, including hidden files. Adding a directory path will list the files in the mentioned directory.

Some commonly used options for ls.

$ ls -l
$ ls -la
$ ls -lrt (sort by modification time, in reverse, so the newest files appear last)
$ ls -lrth (the 'h' option prints file sizes in a human-readable format)


cd

Change directory, move to another directory. cd command without any directory name will redirect to the home directory.

pwd

Print working directory.


rm

Deletes files and directories; for directories you need to add the '-r' option, like rm -r folder_name.

mkdir

The command to create a directory.

touch

The touch command is used to create a file.


cp

The cp command is used to copy a file or directory. To copy a directory recursively you need to add the '-r' option. We need to specify the source and destination to the command, like cp 1.txt 2.txt.


mv

mv command is used to move files or directories through the command line. We can use the same command to rename a file.


echo

The echo command simply returns whatever is given to it, like an echo. It is helpful in some data operations with the help of redirection operators.
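
For example, redirection can be used to write text into a file (the file name is just an example):

$ echo 'hello' > notes.txt
$ echo 'world' >> notes.txt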

cat

The cat command is used to display the contents of a file. It is very useful for small files.


man & --help

To know more details about a command and how to use it, use the man command. It shows the manual pages of the command. eg ‘man cp’ will show the details of the cp command.

Passing the --help argument to a command will give similar information.

$ man cp

$ cp --help

Compiling Linux Kernel from Source code

November 12, 2014

Compiling the Linux kernel from source is a nice way to build your own custom kernel.

It can be done in very few steps as described below.


Step 1:

Download the latest stable kernel from here: https://www.kernel.org/

Make sure you have downloaded the complete kernel, not a patch. Or, in a terminal, you can download it by typing:

wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.16.7.tar.xz

Step 2:

Extract the compressed file.

tar xf linux-3.16.7.tar.xz

Go to the directory linux-3.16.7.

cd linux-3.16.7

Step 3: Install ncurses library

$ sudo apt-get install libncurses5-dev

This lets you configure your kernel through a command-line, menu-based interface.

Step 4: Build the kernel configuration file

There are three common ways to build the kernel configuration file:

1. make oldconfig

2. make menuconfig

3. make xconfig/gconfig

We use make menuconfig

Now run the make menuconfig command.

In the window that opens, you can configure the options for the file system, networking, input/output devices, and so on.

You may not know what to select; look the options up and find out what they do. Then save the configuration file as “.config”. Alternatively, you can copy the current kernel configuration into the present working directory, load it in the menu, edit it if you need to, and save it as “.config”.
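
On many distributions the running kernel's configuration is available under /boot, so one way to get a starting point (assuming such a file exists on your system) is:

$ cp /boot/config-$(uname -r) .config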

Step 5: Compile the kernel

Run the make command and wait.
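
To speed up the build you can run make with several parallel jobs, where nproc reports the number of CPU cores:

$ make -j$(nproc)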

To compile kernel modules, run the make modules command.

It will take quite some time depending upon your system. After that you can test your kernel using qemu.

In a terminal: qemu-system-x86_64 -kernel directory/linux-3.16.7/arch/x86_64/boot/bzImage

If everything is in order, you will see the kernel booting in qemu; since our kernel doesn't have an initial file system, you will then see a kernel panic message.

Adjust the paths to match your own directory. That's all; be patient while compiling the kernel, it will take a while!!