
inception

These are my notes from when I started this project. There was a lot I didn't know, so I tried to document it and keep it here; maybe it is helpful for you. The notes are structured in three parts:

  • Need to Know: the concepts.
  • The Project: the steps I followed in this project, one by one, to get the containers running and working as expected.
  • Good to Know: extra concepts I learned that helped me understand what is happening under the hood, plus some evaluation notes and advice.

Need to know

Containerization

Containerization is a method of operating system-level virtualization that involves packaging an application along with its dependencies into a self-contained unit that can be executed consistently across different computing environments. Containerization provides a standardized and portable approach to deploying and running applications.


Docker

A good reference to get started: https://docs.docker.com/get-started/

Docker is an open-source platform that allows you to automate the deployment, scaling, and management of applications using containerization. It provides a way to package an application and its dependencies into a standardized unit (= a container).

Here are some key features of Docker:

  1. Containerization: Docker utilizes containerization technology, allowing you to package an application and its dependencies into a container.

  2. Portability: Docker containers are highly portable. Once a container is created, it can be run on any machine or cloud platform that supports Docker, without worrying about differences in underlying infrastructure or operating systems. This eliminates the "works on my machine" problem.

  3. Isolation: Docker provides process-level isolation, allowing containers to run independently without interfering with each other or the host system.

  4. Efficiency: Docker enables efficient resource utilization by sharing the host operating system's kernel and resources among containers. This eliminates the need for running separate virtual machines (VMs) for each application, resulting in reduced overhead and improved efficiency.

  5. Version Control: Docker images and containers can be versioned, allowing you to track changes and roll back to previous versions if needed. This enables easier collaboration among team members and facilitates the reproducibility of application environments.

  6. Scalability: Docker simplifies application scaling. You can replicate and distribute containers across multiple hosts, allowing for horizontal scaling and efficient resource allocation. Docker also integrates with orchestration tools, such as Docker Swarm and Kubernetes, to manage containerized applications at scale.

  7. Dependency Management: Docker simplifies dependency management by encapsulating an application and its dependencies within a container. This eliminates conflicts between different versions of software libraries or dependencies, making it easier to manage complex application stacks.

  8. Ecosystem and Community: Docker has a vibrant ecosystem and a large community. It provides a vast collection of pre-built Docker images available from Docker Hub, a public registry. Additionally, the Docker community actively contributes to the development of tooling, libraries, and best practices.

There are other containerization technologies and platforms apart from Docker. While Docker is the most widely known and commonly used containerization solution, there are alternatives such as Podman, rkt, and runc.

Docker piggybacks off of features in the Linux kernel to perform its magic. Because of this reliance on the Linux kernel, it’s important to note that Docker only runs on Linux. For instance, if you develop on an Apple computer (which uses a Darwin/BSD Kernel), you’ll need to install a lightweight Linux virtual machine before being able to use Docker.

Docker, by default, runs natively on Linux operating systems because it leverages certain features of the Linux kernel, such as namespaces and control groups, for containerization. This means that if you have a Linux machine, you can use Docker without any additional setup. However, Docker also provides solutions for non-Linux machines, such as Windows and macOS, through Docker Desktop. Docker Desktop provides a lightweight virtualization environment that runs a Linux virtual machine (LinuxKit) under the hood. This Linux virtual machine enables Docker to function on non-Linux operating systems.

So, while Docker itself relies on Linux kernel features, Docker Desktop provides a way for developers to use Docker on non-Linux machines without requiring a separate Linux installation or virtual machine setup.

Docker is written in the Go programming language and takes advantage of several features of the Linux kernel to deliver its functionality. Docker uses a technology called namespaces to provide the isolated workspace called the container. When you run a container, Docker creates a set of namespaces for that container. Some good references: https://www.codementor.io/blog/docker-technology-5x1kilcbow and https://docs.docker.com/get-started/

Docker image

A Docker image is a read-only template used to create Docker containers. It contains the instructions for creating a container. Docker images are built from a set of instructions defined in a Dockerfile, which specifies how to assemble the image.


Docker compose

Docker Compose is a tool that allows you to define and manage multi-container applications. It is a separate component of Docker that works in conjunction with Docker Engine to simplify the orchestration of multiple containers that make up an application.

With Docker Compose, you can define the configuration of your application services, including their dependencies, network connections, volumes, and other settings, in a YAML file called docker-compose.yml. This file serves as a declarative specification for your application's infrastructure.
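To make this concrete, here is a minimal, hypothetical sketch (service names, images, ports, and values are placeholders for illustration, not this project's actual compose file):

# write a minimal docker-compose.yml and bring the stack up
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx:alpine              # image pulled from a registry (demo only)
    ports:
      - "8000:80"                    # host port 8000 -> container port 80
    volumes:
      - web_data:/usr/share/nginx/html
  db:
    image: mariadb:10.11
    environment:
      MARIADB_ROOT_PASSWORD: example # demo value only
    volumes:
      - db_data:/var/lib/mysql

volumes:
  web_data:
  db_data:
EOF

docker-compose up -d    # create networks, volumes and containers
docker-compose ps       # list the running services
docker-compose down     # stop and remove the containers (named volumes are kept)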

Here are some key features and capabilities of Docker Compose:

  1. Multi-Container Applications: It is designed for managing multi-container applications. You can define multiple services within a single docker-compose.yml file, where each service represents a separate containerized component of your application.

  2. Service Definition: In the docker-compose.yml file, you can specify the Docker images, environment variables, ports, volumes, network connections, and other configuration details for each service. Docker Compose uses this information to create and manage the containers accordingly.

  3. Dependency Management: it allows you to define and manage dependencies between services.

  4. Container Orchestration: it simplifies the orchestration of multiple containers. With a single command (docker-compose up), it can create and start all the containers defined in the docker-compose.yml file, automatically handling the necessary network connections and configurations.

  5. Environment Variables: it supports the use of environment variables within the docker-compose.yml file. This allows you to parameterize your configuration and dynamically pass values to containers at runtime, making your application more flexible and configurable.

  6. Networking and Volumes: it handles the creation and management of networks and volumes required by your application. It can create isolated networks for your services and automatically manage the associated DNS resolution. It also facilitates the configuration of shared volumes between containers.

  7. Development and Testing Environments: it is commonly used for local development and testing environments. It provides an easy way to define the required services and their configurations, allowing developers to spin up the entire application stack with a single command and replicate the production environment locally.


Cheatsheet: https://dockerlabs.collabnix.com/docker/cheatsheet/

.YML

.yml (YAML) is a human-readable data serialization format. It stands for "YAML Ain't Markup Language." YAML is often used for configuration files and data exchange between programming languages. Data serialization refers to the process of converting data structures or objects into a format that can be easily stored, transmitted, or reconstructed later. In other words, it is the transformation of complex data into a serialized form that can be saved to a file, sent over a network, or stored in a database.

Docker engine

Docker Engine, also known as Docker runtime, is the core component of the Docker platform. It is responsible for building, running, and managing Docker containers. Docker Engine combines several essential elements that enable containerization and provides the necessary tools and services to work with Docker.

The Docker Engine consists of three main parts:

  1. Docker Daemon (dockerd)
  2. RESTful API
  3. Command-Line Interface (CLI)

It's important to note that the term "Docker Engine" is sometimes used interchangeably with "Docker" itself, as it represents the core functionality of the Docker platform. However, Docker is a broader ecosystem that includes additional components like Docker Compose, Docker Swarm, and Docker Registry, which extend the capabilities of Docker Engine for orchestration, scaling, and image distribution.

Docker daemon

A daemon refers to a background process or service that runs continuously and performs specific tasks or functions.

  1. Background Process: A daemon runs as a background process without requiring user intervention. It typically starts automatically when the system boots up and continues to run until the system shuts down.

  2. Long-Running: Daemons are designed to run indefinitely, providing continuous services or performing recurring tasks. They often have no specific termination point.

  3. Independent and Headless: Daemons usually operate independently of direct user control or interaction. They do not have a user interface and typically do not receive input from or display output to users directly. Instead, they provide services or perform tasks in the background, often in response to system events or requests.

  4. Service-Oriented: Daemons are often responsible for providing specific services or functionality to other programs or users.

  5. Managed by the System: Daemons are typically managed by the operating system, which starts and stops them as needed.

  6. Runs as a Process: Technically, a daemon is a type of process.

The Docker daemon is the background service that runs on the host machine and manages Docker containers and images. It is a central component of the Docker platform, responsible for building, running, and monitoring containers.

The Docker daemon, also known as dockerd, acts as a server process that listens for Docker API requests from clients and performs the necessary actions to manage containerized applications. It runs continuously in the background on the host operating system.

Docker client

The Docker client, often referred to as the Docker CLI (Command-Line Interface), is a command-line tool that provides a user-friendly interface for interacting with the Docker Engine.


REST API

REST (Representational State Transfer) is an architectural style for designing networked applications. RESTful APIs (Application Programming Interfaces) are interfaces that adhere to the principles of REST. They provide a standardized way for different software systems to communicate and interact with each other over the internet.

Docker Engine exposes a RESTful API that allows clients (such as the Docker CLI) to interact with the Docker daemon. The API provides a set of endpoints and commands that enable users to manage containers, images, volumes, networks, and other Docker resources programmatically.
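As a sketch (assuming a local daemon listening on the default Unix socket /var/run/docker.sock), the same API the CLI uses can be queried directly with curl:

# query the Docker Engine API over the default Unix socket
curl --unix-socket /var/run/docker.sock http://localhost/version          # daemon and API version
curl --unix-socket /var/run/docker.sock http://localhost/containers/json  # running containers (like `docker ps`)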

Command-Line Interface (CLI)

The Docker CLI is a command-line tool that allows users to interact with the Docker Engine and perform various operations related to containers and images. Users can issue commands to build, run, stop, start, inspect, and manage Docker containers using the Docker CLI.
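A few everyday commands as a sketch (the image and container names below are placeholders):

docker build -t myimage .                              # build an image from the Dockerfile in the current directory
docker run -d --name mycontainer -p 8000:80 myimage    # run it in the background with a port mapping
docker ps                                              # list running containers
docker logs mycontainer                                # show the container's output
docker exec -it mycontainer sh                         # open a shell inside the container
docker stop mycontainer && docker rm mycontainer       # stop and remove it
docker images                                          # list local images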

Docker registry

A Docker registry is a central repository that stores Docker images. It is a server-side application that allows users to store and distribute Docker images to be used across different environments and by multiple users.

The Registry is open-source, under the permissive Apache license.

Docker Hub

Docker Hub is a public registry provided by Docker.
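As an illustration of how a registry is used in general (myregistry.example.com is a made-up hostname, not part of this project):

docker pull nginx:alpine                                       # download an image from Docker Hub
docker tag nginx:alpine myregistry.example.com/nginx:alpine    # re-tag it for a private registry
docker push myregistry.example.com/nginx:alpine                # upload it there (requires docker login first)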

Docker vs VM

Docker and virtual machines (VMs) are both technologies used for virtualization, but they have distinct differences in their approach and architecture.


  • Docker containers use operating system-level virtualization. They run on a single host operating system, sharing the host kernel, libraries, and resources, and provide process-level isolation.

  • Virtual machines use hardware-level virtualization, offering full operating system isolation. They emulate complete computer systems, including virtual CPUs, memory, storage, and network interfaces. Each virtual machine runs its own full-fledged operating system on top of a hypervisor, which manages the hardware resources.


Volumes

In Docker, volumes are a feature that allows data to be persistently stored and shared between containers and the host machine. Volumes provide a way to manage and handle persistent data in Docker containers, ensuring that data is preserved even when containers are stopped, removed, or replaced. Volumes are commonly used for scenarios where persistent data storage is required, such as databases, file uploads, application configuration files, or any other data that needs to be preserved across container restarts or upgrades.

By utilizing volumes, Docker makes it easier to manage and handle persistent data within containers, separating the concerns of application logic from data storage and ensuring that important data is preserved and accessible throughout the container lifecycle.


A Docker image is a collection of read-only layers. When you launch a container from an image, Docker adds a read-write layer to the top of that stack of read-only layers. Docker calls this the Union File System. Any time a file is changed, Docker makes a copy of the file from the read-only layers up into the top read-write layer. This leaves the original (read-only) file unchanged. When a container is deleted, that top read-write layer is lost. This means that any changes made after the container was launched are now gone. A Docker volume “lives” outside the container, on the host machine. From the container, the volume acts like a folder which you can use to store and retrieve data.

Docker volumes can be created with the docker volume create some_name command. By default, Docker creates the volume in the Docker host's volume storage area (/var/lib/docker/volumes/ on Linux; on macOS this path lives inside the Docker Desktop virtual machine). But when we run docker-compose, the volumes specified in the volumes section of the Docker Compose file are created automatically: Docker Compose manages the creation and management of volumes for you based on the configuration provided.
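A small sketch of the volume lifecycle (the names and the mariadb image are arbitrary examples):

docker volume create wp_data                          # create a named volume
docker volume inspect wp_data                         # show where it lives on the host
docker run -d --name db -v wp_data:/var/lib/mysql -e MARIADB_ROOT_PASSWORD=example mariadb:10.11
docker rm -f db                                       # remove the container...
docker run --rm -v wp_data:/data alpine ls /data      # ...the data written to the volume is still there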


A good source on volumes: https://www.ionos.com/digitalguide/server/know-how/docker-container-volumes/

Nginx

Nginx is a software that helps websites and applications work better and handle lots of visitors. It can be used as a web server or a middleman between the visitors and the actual servers that store the website or app. Nginx makes websites load faster, distributes the workload across servers, and helps keep everything running smoothly.

It's a popular open-source web server and reverse proxy server software. It is designed to efficiently handle high traffic websites and applications, offering excellent performance, scalability, and reliability.

Nginx can function as a web server, load balancer, reverse proxy server, mail proxy server, and more. Nginx is often used as a front-end proxy to distribute incoming traffic across multiple backend servers, improving the overall performance and reliability of web applications.

Some key features of Nginx include:

  1. High performance: Nginx uses an asynchronous, event-driven architecture that allows it to efficiently handle a large number of concurrent connections while consuming fewer system resources compared to traditional web servers.

  2. Load balancing: Nginx can distribute incoming traffic across multiple servers, helping to evenly distribute the workload and improve the responsiveness of web applications.

  3. Reverse proxying: It can act as a reverse proxy server, sitting between clients and backend servers. This allows Nginx to handle requests on behalf of the backend servers, providing features such as SSL termination, caching, and request routing.

  4. Caching: Nginx includes built-in caching capabilities that can help reduce the load on backend servers by serving static content directly from memory or disk.

  5. SSL/TLS termination: Nginx can handle SSL/TLS encryption and decryption, offloading this processing from backend servers and improving overall performance.

  6. Virtual hosting: Nginx supports virtual hosting, allowing multiple websites or applications to be hosted on a single server.


SSL Certificate

An SSL certificate (Secure Socket Layer), also known as a digital certificate, is a small data file that binds a cryptographic key to an organization's or individual's information. It is used to secure and authenticate the communication between a website and its visitors. SSL certificates enable the HTTPS protocol, which encrypts the data transmitted between the web server and the user's browser, ensuring privacy and integrity.

SSL certificates are what enable websites to use HTTPS, which is more secure than HTTP. An SSL certificate is a data file hosted on a website's origin server. SSL certificates make SSL/TLS encryption possible, and they contain the website's public key and the website's identity, along with related information. Devices attempting to communicate with the origin server reference this file to obtain the public key and verify the server's identity. The private key is kept secret and secure. SSL, more commonly called TLS, is a protocol for encrypting Internet traffic and verifying server identity. An SSL certificate includes the following information in a single data file:

  • The domain name that the certificate was issued for
  • Which person, organization, or device it was issued to
  • Which certificate authority issued it
  • The certificate authority's digital signature
  • Associated subdomains
  • Issue date of the certificate
  • Expiration date of the certificate
  • The public key (the private key is kept secret)

A website needs an SSL certificate in order to keep user data secure, verify ownership of the website, prevent attackers from creating a fake version of the site, and gain user trust.


OpenSSL

An OpenSSL certificate refers to a certificate generated using the OpenSSL toolkit, which is a widely used open-source library for implementing secure communication protocols. OpenSSL provides tools for generating self-signed certificates, which are certificates that are not issued by a trusted third-party certificate authority but are instead signed by the entity itself.

Self-signed certificates are not trusted by default in web browsers, as they are not issued by a recognized certificate authority. When accessing a website using a self-signed certificate, the browser will display a warning to the user. However, for testing purposes or internal use, self-signed certificates can be sufficient.

Project

What are we doing in this project?

The overall goal of the project is to set up a local web infrastructure using Docker. By configuring NGINX, WordPress, and MariaDB in separate containers, we will create a fully functioning website accessible through a domain name (e.g. login.42.fr) or localhost. We'll be able to create posts, pages, and upload files, ensuring that the data is stored in the MariaDB database and served by NGINX through the WordPress application.

We will have a working local web server environment capable of hosting and managing a website using WordPress and MariaDB.

NGINX serves as the web server of our website in this project. It receives incoming HTTP requests and handles the routing and serving of web pages. NGINX acts as the software running on our local machine, functioning as the server for our website.

MariaDB is the database management system used in this project. It stores and manages the website's data, including posts, pages, and other content created using WordPress. MariaDB ensures that your changes are saved and allows for efficient retrieval and storage of data.

WordPress is used for designing and managing webpages. It is a tool for creating and editing content, including posts, pages, and media files. WordPress interacts with the MariaDB database to store and retrieve data.

1-VM

First, we need to do everything the project asks for in a VM, so the very first step is to install one. We can do it via VirtualBox, which is open source and free. There is no requirement to use the machine from the command line only; we can simply install a graphical interface such as GNOME.

1. VM: I installed Debian 11 and allocated 20 GB of disk space for my VM. (If the allocated space is too small, the software and packages will not install correctly.) During the installation, tick these to be installed when asked: graphical desktop + GNOME + SSH server.

2. Docker on VM

Second, I installed Docker on my VM (guide here: https://docs.docker.com/engine/install/debian/).

3. Connected my host with my VM through SSH.

I installed the SSH server on my VM with sudo apt install openssh-server, then added a port-forwarding rule following this reference: https://dev.to/developertharun/easy-way-to-ssh-into-virtualbox-machine-any-os-just-x-steps-5d9i. On the host machine, I connect with ssh -p 3022 <username>@127.0.0.1 and provide the VM user's password (the normal VM username, in lowercase). By default, the SSH server on Debian does not allow direct root login via SSH for security reasons. Once connected to the VM through SSH, we can clone our repo inside the VM from the host's terminal and edit our code normally on our machine, but we first need to add the VM's SSH key to our git repo.
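A sketch of that setup, assuming the VM is named "Debian11" in VirtualBox and is powered off when the NAT rule is added:

# on the host: forward host port 3022 to the VM's port 22
VBoxManage modifyvm "Debian11" --natpf1 "guestssh,tcp,,3022,,22"

# inside the VM: install and enable the SSH server
sudo apt install openssh-server
sudo systemctl enable --now ssh

# back on the host: connect
ssh -p 3022 <username>@127.0.0.1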

2-Nginx

NGINX acts as the software running on our local machine, functioning as the server for our website. It receives incoming HTTP requests and handles the routing and serving of web pages.

2.1 Docker file

A Dockerfile is a text file that contains a set of instructions to build a Docker image. It specifies the base image, installs necessary dependencies, configures the environment, and sets up the container for a specific service.

In the case of the NGINX Dockerfile, it defines the steps to build a Docker image that includes NGINX as the primary service.

Base image

The base image is an OS image that is packed with the Docker container; everything in the container is based on this layer. In simple words, since each container is a running process (with some added encapsulation features applied to keep it isolated from the host and from other containers), it needs a base OS to build its layers on. This OS is the base image.


My choice: Debian:buster

Debian:

  • Larger image size due to more installed packages and libraries
  • Provides a more comprehensive set of tools and packages
  • Suitable for applications with complex dependencies and compatibility requirements
  • Offers better support for older software versions
  • Good for general-purpose applications

Alpine:

  • Smaller image size, resulting in faster image downloads and reduced disk space usage
  • Lightweight and minimalistic design
  • Has a smaller attack surface, making it potentially more secure
  • Suitable for microservices, containerized environments, and resource-constrained systems
  • Requires Alpine-specific package management (apk) instead of apt-get used in Debian

By adding a line like the one below to our Dockerfile, the base image is defined:

FROM alpine:3.18
Installation

The first thing that we need to do, is to install the packages that we need. We should install their dependencies too.

RUN apk update && apk upgrade
RUN apk add nginx

RUN apk update && apk upgrade: Updates the package repositories and upgrades the existing packages inside the Docker image. RUN apk add nginx: Installs NGINX inside the Docker image. apk is the package manager used in Alpine Linux. It stands for "Alpine Package Keeper."

Configure Nginx

We need to replace the default NGINX configuration file with our own. We should create a new file for example named nginx.conf in the same directory. Inside nginx.conf, we can define NGINX configuration.

what is a config file?

A config file for NGINX is a file that contains configuration directives and settings for the NGINX web server. It specifies how NGINX should behave and handle various aspects of web serving, such as server listening ports, server names, request handling, proxy settings, caching, SSL/TLS configurations, and much more.

Overall, the NGINX configuration block we will use (shown in section 2.2 below) sets up a basic HTTP server listening on port 80 and responding to requests for the "localhost" hostname. When a request is made to the root URL ("/"), NGINX will look for an "index.html" file in the specified root directory ("/usr/share/nginx/html") and serve it as the response.

The underlying concept behind this configuration is that NGINX acts as a web server, receiving incoming requests and responding with the appropriate files. The listen directive determines the port on which NGINX listens for requests, and the server_name directive specifies the hostname that triggers this server block. The location block allows you to define different configurations and behaviors for specific URL paths. In this case, it handles requests for the root URL and sets the root directory and index file to serve.

Listening to a port: what does it mean? When we say a server is listening on a port, it means that the server is actively monitoring and waiting for incoming network connections on a specific communication endpoint known as a port. The server waits for requests or data packets to arrive on that port and responds to or handles them according to its configuration and the service running on that port.

A port is a virtual construct used in networking to identify specific services or processes running on a computer or server; it is a communication endpoint in an operating system that enables processes to establish network connections and exchange data. Ports are numbered from 0 to 65535 and are divided into different ranges. For example, well-known ports (0-1023) are reserved for standard services like HTTP (port 80), HTTPS (port 443), FTP (port 21), SSH (port 22), etc.


2.2 A simple test to run Nginx with docker

One easy way to get an nginx container running: I want to see a simple message when I access localhost:8000. To do this:

1. First, we make index.html in the parent directory.

index.html is the default HTML file that will be shown in the browser. A simple message is printed with the HTML code below:

<!DOCTYPE html>
<html>
<head>
<title>Welcome to Nginx</title>
</head>
<body>
<h1>Welcome to Nginx!</h1>
</body>
</html>
2. Make this nginx config file
events {}

http {
	server {
		listen 80;
		server_name localhost;

		location / {
			root /usr/share/nginx/html;
			index index.html;
		}
	}
}

The http block contains the main configuration for the HTTP server. Within the http block, there is a server block that defines the server-level configuration.

  • listen 80; specifies that Nginx should listen on port 80 for incoming HTTP requests. Port 80 is the default port for HTTP traffic.

  • server_name localhost; sets the server name associated with this server block. In this case, it is set to localhost, which means that this server block will handle requests directed to localhost.

  • location / { ... } defines the configuration for the root location / of the server. The location block allows you to specify how Nginx should handle requests for specific URLs or directories.

    • root /usr/share/nginx/html; sets the root directory for the server. It specifies the directory from which Nginx will serve files in response to requests.

    • index index.html; specifies the default file to serve when a directory is requested. In this case, if the client requests the root URL /, Nginx will look for an index.html file in the root directory and serve it if found.

This configuration sets up a basic HTTP server that listens on port 80, responds to requests for the localhost hostname, and serves files from the /usr/share/nginx/html directory. The index.html file in that directory will be served as the default file for the root URL.

3. Write the Dockerfile:

For example, base image of alpine:3.17 is used:


FROM alpine:3.17

# Install Nginx
RUN apk update && apk add nginx

# Copy the Nginx configuration file
COPY conf/nginx.conf /etc/nginx/nginx.conf  

# Copy the HTML file
COPY index.html /usr/share/nginx/html/index.html
  

# Expose port 80 for HTTP
EXPOSE 80

# Start Nginx
CMD ["nginx", "-g", "daemon off;"]

docker build -t example . builds the image and tags it as example, and docker run -d -p 8000:80 example starts a container from that image in the background. Now, if we visit localhost:8000 in a web browser, we see the index.html page with the welcome message, served through the port mapping -p 8000:80 specified in the docker run command.


2.3 SSL certificates

The SSL certificates are necessary for enabling HTTPS and establishing a secure encrypted connection between the server and the client's browser. The ssl_certificate and ssl_certificate_key directives in the nginx config file are how these certificates are introduced (see the sketch after the list below).

  1. ssl_certificate: This directive specifies the path to the SSL certificate file. The SSL certificate contains the public key of the server, which is used to encrypt the data sent from the server to the client. It also includes information about the domain name and other details of the certificate.

    NGINX uses this certificate to authenticate itself to the client and prove that it is the legitimate server for the requested domain. When a client connects to the NGINX server over HTTPS, it checks the validity and authenticity of the certificate presented by the server.

  2. ssl_certificate_key: This directive specifies the path to the private key file that corresponds to the SSL certificate. The private key is used for decrypting the encrypted data received from the client. It ensures that only the server can decrypt the data encrypted with its corresponding public key.
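As a sketch, a server block using these directives might look like the following (the file path, certificate locations, hostname, and TLS versions are assumptions for illustration; the server block belongs inside the http { } context of the nginx.conf):

cat > conf/ssl_server.conf <<'EOF'
# paste this server block inside the http { } block of nginx.conf
server {
    listen 443 ssl;
    server_name login.42.fr;

    ssl_certificate     /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;

    root  /usr/share/nginx/html;
    index index.html;
}
EOF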

2.4 How to make SSL certificates?

We need to make our own self-signed SSL certificate. To do so, we need to install OpenSSL on our VM:

apt install openssl

Then, we generate a private key. Navigate to the directory where we want to generate the SSL certificate files and run:

openssl genrsa -out privkey.pem 2048

This command generates a 2048-bit private key and saves it in the privkey.pem file.

Then, generate a certificate signing request (CSR) by running:

openssl req -new -key privkey.pem -out csr.pem

This command prompts us to provide information such as our organization details and domain name. Fill in the required information to generate the CSR.

Then, generate a self-signed certificate using the private key and the CSR:

openssl x509 -req -in csr.pem -signkey privkey.pem -out cert.pem

This command generates a self-signed certificate and saves it in the cert.pem file. Once we have the privkey.pem and cert.pem files, we can use them in our NGINX configuration file for the ssl_certificate_key and ssl_certificate directives respectively.
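Alternatively, a key and a self-signed certificate can be generated in one command without an interactive prompt (the subject /CN=login.42.fr and the 365-day validity are example values):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
        -keyout privkey.pem -out cert.pem \
        -subj "/CN=login.42.fr"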

3. MariaDB

In the project, MariaDB is used as the database for the WordPress application. It stores the website's data, including posts, comments, user information, and other relevant data.

3.1 Dockerfile
FROM debian:buster

RUN apt-get update && apt-get install -y mariadb-server

COPY conf/my.cnf /etc/mysql/my.cnf
COPY conf/init.sql /tmp/init.sql

EXPOSE 3306

CMD ["mysqld", "--user=mysql", "--init-file=/tmp/init.sql"]

The Dockerfile is used to build a Docker image for running a MariaDB database server.

  • FROM debian:buster specifies the base image to use, which is Debian Buster in this case.
  • RUN apt-get update && apt-get install -y mariadb-server installs the MariaDB server package.
  • COPY conf/my.cnf /etc/mysql/my.cnf copies the my.cnf file from the local conf directory to the /etc/mysql/ directory inside the container. This file contains configuration settings for MariaDB.
  • COPY conf/init.sql /tmp/init.sql copies the init.sql file from the local conf directory to the /tmp/ directory inside the container. This file contains SQL statements to initialize the database.
  • EXPOSE 3306 specifies that port 3306 should be exposed to allow connections to the MariaDB server.
  • CMD ["mysqld", "--user=mysql", "--init-file=/tmp/init.sql"] sets the command to run when the container starts. It starts the mysqld daemon with the specified user and uses the init.sql file to initialize the database.

3.2 Conf file
[client-server]
socket=/var/lib/mysql/mysql.sock
port=3306

[mysqld]
bind-address=0.0.0.0
skip-networking=false
datadir=/var/lib/mysql

[mariadb]
log_warnings=4
log_error=/var/log/mysql/mariadb.err

[client-server]: This section specifies configuration options related to client and server communication.

  • socket=/var/lib/mysql/mysql.sock: Specifies the path to the Unix socket file used for local client connections.
  • port=3306: Sets the port number on which the MySQL server listens for incoming client connections.

[mysqld]: This section contains configuration options for the MySQL server itself.

  • bind-address=0.0.0.0: Configures the server to listen on all available network interfaces, allowing remote connections.
  • skip-networking=false: Enables networking support, allowing the server to accept TCP/IP connections.
  • datadir=/var/lib/mysql: Specifies the directory where MySQL stores its data files.

[mariadb]: This section includes MariaDB-specific configuration options.

  • log_warnings=4: Sets the level of verbosity for logging warnings.
  • log_error=/var/log/mysql/mariadb.err: Specifies the file path for logging MySQL server errors.

3.3 Making a wordpress database (init.sql)
USE mysql;
CREATE DATABASE IF NOT EXISTS wordpress;
CREATE USER IF NOT EXISTS 'wordpress'@'%' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'%';
FLUSH PRIVILEGES;
ALTER USER 'root'@'localhost' IDENTIFIED BY 'wordpress';

These SQL commands are executed when the MariaDB server starts. They create a database called wordpress, create a user named 'wordpress' with the password 'secret', grant all privileges on the wordpress database to that user, flush the privileges to apply the changes, and finally, change the password of the 'root' user to 'wordpress'.

Init.sql configurations and SQL statements collectively set up the MySQL server with the desired settings and initialize it with a wordpress database and user, ensuring necessary privileges are granted.
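To check that the initialization worked, something like the following can be run once the container is up (the container name mariadb is a placeholder, and the passwords are the ones from the init.sql above):

docker exec -it mariadb mysql -u wordpress -psecret -e "SHOW DATABASES;"
docker exec -it mariadb mysql -u root -pwordpress -e "SELECT user, host FROM mysql.user;"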

Later note: Attention! Here I have forgotten to take the credentials out. In the project we must pass passwords via environment variables and put them in the .env file. I have done this in the WordPress setup later in these notes.
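A sketch of that approach (the variable names are hypothetical; docker-compose reads a .env file placed next to docker-compose.yml and substitutes ${...} references in it):

cat > .env <<'EOF'
MYSQL_DATABASE=wordpress
MYSQL_USER=wordpress
MYSQL_PASSWORD=change_me
MYSQL_ROOT_PASSWORD=change_me_too
EOF
# in docker-compose.yml, a service can then reference the values, e.g.:
#   environment:
#     MYSQL_PASSWORD: ${MYSQL_PASSWORD}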

4. Wordpress

The role of the WordPress + php-fpm container in this project is to host and serve the WordPress website, executing PHP code and connecting to the MariaDB container for database operations.
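One common way (not necessarily the one used in every setup) to install and configure WordPress non-interactively inside the container is WP-CLI; all values below are placeholders and WP-CLI is assumed to be installed in the image (see https://wp-cli.org/):

wp core download --path=/var/www/html --allow-root
wp config create --path=/var/www/html --allow-root \
   --dbname=wordpress --dbuser=wordpress --dbpass=secret --dbhost=mariadb:3306
wp core install --path=/var/www/html --allow-root \
   --url=https://login.42.fr --title="Inception" \
   --admin_user=supervisor --admin_password=change_me --admin_email=admin@example.com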

Good to Know

Local Host

Remember that when you reach a website, you use its domain name (DNS)? The DNS maps a name to a machine's IP address on the internet, so when you want to reach a host, the name brings you to a certain IP. A "host" usually refers to a remote computer or server that is hosting a particular service or website. For example, when you access a website on the internet, you are connecting to a host (server) that is hosting that website.


Now, when we talk about localhost, we talk about an IP address that brings us back to our own computer. Normally, when you use an IP address, your computer contacts another computer on the network:


but with localhost, you use the loopback address and your computer is talking to itself:


Localhost refers to the loopback network interface of a device, commonly represented by the IP address 127.0.0.1. It is used to access the network services that are running on the same device or computer. In simpler terms, localhost is a way to refer to your own computer or device itself.

When you access localhost, you are connecting to the network services hosted on your own machine. For example, if you run a web server on your computer, accessing http://localhost in a web browser would display the web pages hosted by the local server.

The term "localhost" is often used in web development, software testing, and network troubleshooting. It allows developers to test and debug applications locally before deploying them to a remote server. By using localhost, developers can simulate the production environment on their own machine and ensure that everything works as intended.

In addition to 127.0.0.1, the hostname "localhost" can also be resolved to the loopback address "::1" in IPv6-enabled systems. Both IPv4 and IPv6 protocols support the concept of localhost, allowing network services to listen on the loopback interface for local access.

127.0.0.1 to 127.255.255.255   localhost
::1          localhost

docker run -p 8000:80 nginx

When you run the docker run -d -p 8000:80 nginx command to create and start a Docker container based on the official Nginx image, several things happen:

  1. Docker pulls the Nginx image: If you don't already have the Nginx image locally, Docker will download it from the Docker Hub repository. If you don't want to pull from Docker Hub, make sure the Nginx image is already present on your system before running the command.

  2. Docker creates a container: Docker creates a new container based on the Nginx image. This container is an isolated and lightweight runtime environment that encapsulates the Nginx web server and its dependencies.

  3. Port mapping: The -p 8000:80 option maps port 8000 on your local host machine to port 80 inside the Nginx container. Port 80 is the default port on which Nginx listens for incoming HTTP requests. By mapping port 8000 on your local machine to port 80 in the container, you are instructing Docker to forward HTTP traffic from your host machine to the Nginx container.

  4. Container execution: Docker starts the container in the background (-d flag), and the Nginx web server starts running inside the container.

Now, when you visit localhost:8000 in your web browser, the following happens:

  1. Your web browser sends an HTTP request to your local machine on port 8000.

  2. The Docker daemon on your machine receives the request on port 8000 and forwards it to the Nginx container running on port 80.

  3. Nginx inside the container receives the request and generates an HTTP response.

  4. The Docker daemon sends the response back to your web browser, which renders it for display.

By default, the Nginx welcome page is configured to be served when you access the web server without specifying a specific file or location. This is why you see the Nginx welcome message when you visit localhost:8000. The welcome page typically displays the Nginx logo and a "Welcome to nginx!" message.

In summary, the docker run command creates and starts a Docker container based on the Nginx image, and the port mapping (-p) allows you to access the Nginx web server running inside the container on your local machine. By visiting localhost:8000 in your web browser, you can see the default Nginx welcome page served by the Nginx container.
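A quick way to see this in action from a terminal (the container name web is arbitrary):

docker run -d --name web -p 8000:80 nginx
docker ps --format '{{.Names}}\t{{.Ports}}'   # shows something like 0.0.0.0:8000->80/tcp
curl -I http://localhost:8000                 # the response headers come from nginx inside the container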


MariaDB

MariaDB is an open-source relational database management system (RDBMS) that is a fork of MySQL. It was developed as a drop-in replacement for MySQL, designed to maintain compatibility with MySQL and provide additional features and improvements.

The "wordpress" database is commonly associated with the WordPress content management system (CMS). WordPress uses a database to store its content, including posts, pages, comments, user information, and other settings. When you install WordPress, you typically configure it to use a database, and the database is where all the data related to your WordPress site is stored.

By creating the "wordpress" database in your initialization script, you are setting up the container to be ready for hosting a WordPress site. The database will be used by WordPress to store its data once you install and configure WordPress.

PHP FPM

PHP (Hypertext Preprocessor) is a server-side scripting language primarily used for web development. It is a popular programming language for creating dynamic web pages and applications. PHP code is embedded within HTML code and is executed on the server before the resulting HTML is sent to the client's browser.

PHP-FPM (FastCGI Process Manager) is an alternative PHP FastCGI implementation that improves the performance and scalability of PHP-based applications. It works by managing a pool of PHP worker processes, which can handle multiple requests simultaneously, leading to better resource utilization and reduced response times. PHP-FPM is commonly used in conjunction with web servers like Nginx or Apache to handle PHP requests efficiently, especially in high-traffic environments.
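As a sketch of how this wiring typically looks on the nginx side (the upstream hostname wordpress, port 9000, and the file path are assumptions; the location block belongs inside a server { } context):

cat > conf/php_location.conf <<'EOF'
# paste this location block inside the server { } block of nginx.conf
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass  wordpress:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
EOF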


Web server vs Application server vs Database Server

Web Server: A web server is software that handles requests from web browsers and delivers web content to users over the internet. It serves static files, such as HTML, CSS, and images, to the client's web browser. Popular web servers include Apache HTTP Server and Nginx.

Application Server: An application server is a software framework that provides an environment for running web applications. It handles the execution of application logic and business processes. Application servers are responsible for dynamic content generation and interaction with databases. Examples of application servers include Apache Tomcat and JBoss.

Database: A database is a structured collection of data that is organized and stored for easy access, retrieval, and manipulation. It is used to store and manage structured data for applications. Examples of databases include MySQL, PostgreSQL, and Oracle Database.

Relation: The web server, application server, and database work together to serve dynamic web applications. When a user makes a request to a web application through their web browser, the web server receives the request. It can handle static content by itself, but for dynamic content, it passes the request to the application server.

The application server executes the application's logic, retrieves or updates data from the database, and generates a dynamic response. It communicates with the database to perform operations like reading, writing, or modifying data.

The database stores and manages the application's data. It provides a structured storage mechanism, and the application server interacts with it to perform data operations requested by the user.

In summary, the web server handles the initial request, the application server executes the application's logic and communicates with the database, and the database stores and manages the data needed by the application. Together, they form the infrastructure for serving dynamic web applications.
