How to Implement Elastic Search With Laravel and Docker (1)

Benjamin Ayangbola
4 min read · Jul 29, 2019

In this tutorial, you will learn how to deploy Elastic Search in a Docker container and integrate it with your Laravel application. I assume that you already have a Laravel 5.8 application set up. I use Ubuntu 18.04 and will be working a lot with bash. But before we run any terminal command, let’s take a look at what a Docker container is.


Docker Containers

According to the Docker website:

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

The purpose of a container is to separate an application and all of its dependencies into a self-contained unit that can be run or deployed anywhere. You might be tempted to think of containers as just another name for VMs (Virtual Machines), but they differ not only in architecture but also in performance. When you’re done with this tutorial, you might want to read up on A Beginner-Friendly Introduction to Containers, VMs and Docker for a detailed comparison. Now, let’s install Docker.

Installing Docker and Elastic Search

Open a Linux terminal using CTRL + ALT + T. When the terminal is open, run these commands to install Docker:

sudo apt update
sudo apt install docker.io
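
Once the installation finishes, it’s worth confirming that Docker is installed and that its service is running. A quick sanity check, assuming a standard Ubuntu 18.04 setup with systemd:

docker --version
# prints the installed Docker version
sudo systemctl status docker
# the output should include "active (running)"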

After Docker is installed, it’s time to download and install a Docker image for Elastic Search. Think of this image as “an unborn container” that already has the configurations and instructions for a complete version of Elastic Search. The command below downloads a Docker image for Elastic Search version 7.2.0:

docker pull docker.elastic.co/elasticsearch/elasticsearch:7.2.0
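
If you’d like to confirm the download, listing your local Docker images should show the Elastic Search image you just pulled:

sudo docker images
# the list should include docker.elastic.co/elasticsearch/elasticsearch with tag 7.2.0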

After downloading the image, run the following command to create an instance of that image as a container:

sudo docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.2.0

The command runs the Elastic Search Docker container in detached mode; that’s what the -d flag is for: “don’t block my terminal after the container starts”. The -p flags map ports 9200 and 9300 from the container to your host, and the discovery.type=single-node setting tells Elastic Search to run as a single node instead of trying to form a cluster. Once the container is created, the Elastic Search application becomes accessible on port 9200 of your host. But first, confirm that the Elastic Search container is running. Type sudo docker ps in the terminal and hit the ENTER key. You should see something like this:

CONTAINER ID   IMAGE                                   COMMAND                   CREATED          STATUS         PORTS
22d722ed2d0d   docker.elastic.co/elasticsearch:7.2.0   "/usr/local/bin/dock.."   43 seconds ago   Up 2 seconds   0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp

Your CONTAINER ID will be different. Now we have an Elastic Search container up and running.
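
As an aside, the usual Docker lifecycle commands work on this container as well. A quick sketch, using the CONTAINER ID from the sudo docker ps output above (yours will be different):

sudo docker stop 22d722ed2d0d    # stop the Elastic Search container
sudo docker start 22d722ed2d0d   # start it again later without re-running the long docker run command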

Accessing the Elastic Search Application


Remember that Docker containerizes our Elastic Search application. The application is then accessible via port 9200 of your server’s IP address. Since we’re running a local server, open your browser and visit http://localhost:9200. The response you get should be similar to this:

{
  "name": "22d722ed2d0d",
  "cluster_name": "docker-cluster",
  "cluster_uuid": "8-VfFF9oTsC74pFFJtDpRQ",
  "version": {
    "number": "7.2.0",
    "build_flavor": "default",
    "build_type": "docker",
    "build_hash": "508c38a",
    "build_date": "2019-06-20T15:54:18.811730Z",
    "build_snapshot": false,
    "lucene_version": "8.0.0",
    "minimum_wire_compatibility_version": "6.8.0",
    "minimum_index_compatibility_version": "6.0.0-beta1"
  },
  "tagline": "You Know, for Search"
}

If you get a blank screen or a browser message that the address cannot be reached, wait about 20 seconds, then try http://localhost:9200 again. It sometimes takes a few seconds for the application to become available on the specified port, even after the Docker container has started running. If you still don’t get the JSON response above after that, something has gone wrong and you may have to go over the container setup again.
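
Before starting over, two quick checks from the terminal can save you time: hit the port with curl and look at the container’s logs. The CONTAINER ID below is the one from sudo docker ps; yours will be different:

curl http://localhost:9200
# should return the same JSON shown above once the node is ready
sudo docker logs 22d722ed2d0d
# shows the Elastic Search startup logs; look for errors near the end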

Elastic Search Indexing

What’s a very big library without books? We have a containerized Elastic Search application set up, but what’s in there to search? It’s time to feed our Elastic Search container with structured data. Elastic Search exposes APIs for creating, updating and deleting indexes, and for adding documents to them. Think of an index as a database, although in its truest sense it is not a database. But that’s a talk for another day.
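
To get a feel for this API before we wire up Laravel in the next part, you can create an index and add a single document straight from the terminal with curl. This is just an illustrative sketch; the customers index name and the document fields are made up for the example:

curl -X PUT "http://localhost:9200/customers"
# creates an empty index called customers

curl -X PUT "http://localhost:9200/customers/_doc/1" -H 'Content-Type: application/json' -d '{"company": "Acme Ltd", "city": "Lagos"}'
# adds one document with ID 1 to the customers index

curl -X GET "http://localhost:9200/customers/_doc/1"
# fetches the document back, wrapped in Elastic Search metadata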

What Are We Indexing?

For this tutorial, I’ll adopt Dale Scott’s adaptation of Microsoft’s Northwind sample database. Let’s proceed by creating a database called northwind using your favourite database management tool. From the terminal, it’s as easy as logging in to MySQL and running CREATE DATABASE northwind;. You may use phpMyAdmin, but I personally prefer the terminal or MySQL Workbench. After the database is created, copy the SQL file that creates the Northwind tables from Scott’s GitHub repo, paste it into a file and save it as create_tables.sql. This will add tables to our database. In your terminal, cd to the directory where you saved create_tables.sql, then run the following command:

mysql -u root -p -D northwind < create_tables.sql

If your local MySQL root user does not have a password (not recommended), you may remove the -p flag before running the command.
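
To confirm that the tables were created, a quick check like the one below should list the Northwind tables (customers, orders, products and so on):

mysql -u root -p -D northwind -e "SHOW TABLES"
# lists every table in the northwind database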

Now we have tables in our database. Let’s import sample data. Copy the Northwind SQL data file, also from Scott’s GitHub repo, and save it in a file as import_data.sql. cd to the directory where the file is saved and run the command below:

mysql -u root -p -D northwind < import_data.sql

After the import is done, we’ll have a Northwind database ready to be indexed.
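
A quick row count on one of the tables is an easy way to confirm that the import actually loaded data. For example, counting the customers (the exact number depends on the version of the dump you used):

mysql -u root -p -D northwind -e "SELECT COUNT(*) FROM customers"
# should return a non-zero count if the import succeeded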

In the next tutorial, we’ll connect our Laravel application to the Elastic Search container we just created. We’ll set up a model for the customers table and create a controller that converts the SQL data in the table to an Elastic Search index.
