• Flask + Angular Full-Stack Tutorial Part 4 - PostgreSQL Part 1 (5-24-16)

    Contents:

    1. Intro
    2. Why is a Database Necessary?
    3. Text File Database
    4. SQL Basics
    5. Installing PostgreSQL
    6. Creating a Database
    7. Principle of Least Privilege
    8. Using PSQL
    9. Psycopg2
    10. Displaying the Results
    11. Wrapping Up

    Intro

    In Part 3 of this tutorial series, we looked at how to write a basic Flask app, made our HTML modular with Jinja templates, and added some basic style with Bootstrap and Font Awesome. In this part, we are going to learn how to create a PostgreSQL database that will interface with our server and HTML code.

    (Screenshot: the PostgreSQL website)

    Why is a Database Necessary?

    A database is necessary for our app because we want to store data generated by user activity in a centralized location and load it again later when we need it. We also want that data stored in a way that is well-organized and easy to import/export. Data generated by user input (such as a user's email or a pet's name) needs to be retrievable independently of other related data (for example, via key->value pairs), and data created by one user should be available to other users in controlled amounts. These are all reasons (and there are many more, depending on the type of app) to implement a database.

    Text file database

    Note: Only read this section if you are curious as to why a text file database implemented from scratch would be impractical for a web app. Otherwise, skip down to the SQL Basics section, or if you know SQL basics, to the Installing PostgreSQL section.

    If we wanted to, we could use a single text file as our database. A text file can be a centralized source of data that can be read and written to, and it can contain any data organization scheme we choose to write to it. In our text file database, we have a completely blank slate in terms of what we want our data organization scheme to be. For our users and pets, let’s type some examples of how we think they could be represented in a database:

    
        type: user
        name: Billy
        joined-on: 12-12-12
        email: billy@billy.com
    
        type: user
        name: Sally
        joined-on: 01-06-16
        email: sally@sally.com
    
        type: pet
        name: Doggy
        added-on: 11-04-15
        breed: poodle
    
        type: pet
        name: Kitty
        added-on: 05-04-13
        breed: Burmese
      

    That’s a good start. Let’s say we have an html form with four fields, and the relevant fields are displayed depending on whether we are adding a user or a pet. When we submit the form, we can write code that generates a string for each field as a key->value pair, writes those strings to separate lines, and inserts a blank line in between objects. To retrieve the data, we will write code that reads the database line-by-line, performing conditional checks along the way to see whether the data we are targeting exists. For example, our algorithm for loading a user from the data we have could be:

    • step 1: read the current line.
    • step 2: if the line begins with "type:", interpret the current and following lines (up to an empty line) as a user object; otherwise, keep searching for a line that begins with "type:".
    • step 3: load each key->value pair for the user until an empty line is reached (a rough Python sketch of this parser follows the list).
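
    To make the algorithm concrete, here is a minimal Python sketch of that parser. It is only an illustration of the approach described above; the file name database.txt and the function name load_records are assumptions, not code from the app.

      # Rough sketch of the naive line-by-line parser described above.
      # The file name and record layout are assumptions for illustration.
      def load_records(path="database.txt"):
          records = []
          current = None                    # the record we are currently filling in
          with open(path) as db:
              for raw_line in db:
                  line = raw_line.strip()
                  if not line:
                      current = None        # a blank line ends the current record
                  elif line.startswith("type:"):
                      current = {}          # a "type:" line starts a new record
                      records.append(current)
                      current["type"] = line.split(":", 1)[1].strip()
                  elif current is not None and ":" in line:
                      key, value = line.split(":", 1)
                      current[key.strip()] = value.strip()
          return records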

    Seems good enough. But then one day a sneaky user inputs a line of code into the fields:

    
        <p style="margin:0">Name: </p>
        <input type="text" value="\n\n\n" style="color:black">
        <p style="margin:0">Email: </p>
        <input type="text" value="sneaky_user@sneak.com" style="color:black">
      

    Depending on how the code is being written into the text file and the amount of error checking we have, this might cause a resulting database entry:

    
        type: user
        name:
    
    
    
        joined-on: 04-03-16
        email: sneaky_user@sneak.com
    
        type: user
        name: Billy
        joined-on: 12-12-12
        email: billy@billy.com
    
        type: user
        name: Sally
        joined-on: 01-06-16
        email: sally@sally.com
    
        type: pet
        name: Doggy
        added-on: 11-04-15
        breed: poodle
    
        type: pet
        name: Kitty
        added-on: 05-04-13
        breed: Burmese
      

    And since our current algorithm stops interpreting a user object when the first empty line is encountered after reading “type:”, it will skip over the joined-on and email keys for the sneaky_user. We can’t have that happen; our data could get corrupted! So let’s enclose the keys and values in double quotes when we store things. After fixing the damage caused by sneaky_user, our database now looks like:

    
        "type": "user"
        "name": "\n\n\n"
        "joined-on": "04-03-16"
        "email": "sneaky_user@sneak.com"
    
        "type": "user"
        "name": "Billy"
        "joined-on": "12-12-12"
        "email": "billy@billy.com"
    
        "type": "user"
        "name": "Sally"
        "joined-on": "01-06-16"
        "email": "sally@sally.com"
    
        "type": "pet"
        "name": "Doggy"
        "added-on": "11-04-15"
        "breed": "poodle"
    
        "type": "pet"
        "name": "Kitty"
        "added-on": "05-04-13"
        "breed": "Burmese"
      

    We have changed our form handling so that the data gets stored inside double quotes and so that newline characters no longer create new lines. We also changed our algorithm to treat everything inside double quotes as either a key or a value paired by an inner colon, ignoring anything outside double quotes that is not a newline. This is a big step up from what we just had. But then sneaky_user returns and submits another harmful value through our form:

    
        <p style="margin:0">Name: </p>
        <input value="&quot" style="color:black">
        <p style="margin:0">Email: </p>
        <input type="text" value="sneaky_user@sneak.com" style="color:black">
      

    Note that instead of typing the HTML entity for a double quote, the sneaky user would actually type a literal double-quote character (") into the field; the snippet above uses the entity only because the value attribute is itself delimited by double quotes, so a raw double quote cannot appear inside it. When the data is submitted into the database, something like this might happen:

    
        "type": "user"
        "name": """
        "joined-on": "04-03-16"
        "email": "sneaky_user@sneak.com"
    
        "type": "user"
        "name": "\n\n\n"
        "joined-on": "04-03-16"
        "email": "sneaky_user@sneak.com"
    
        "type": "user"
        "name": "Billy"
        "joined-on": "12-12-12"
        "email": "billy@billy.com"
    
        "type": "user"
        "name": "Sally"
        "joined-on": "01-06-16"
        "email": "sally@sally.com"
    
        "type": "pet"
        "name": "Doggy"
        "added-on": "11-04-15"
        "breed": "poodle"
    
        "type": "pet"
        "name": "Kitty"
        "added-on": "05-04-13"
        "breed": "Burmese"
      

    Since sneaky_user entered a single double quote, the quote pairing for the rest of the database is thrown off! Now instead of the keys and values being enclosed in double quotes, the colons and empty lines are. This breaks our algorithm again, and we would have to go back to the drawing board to somehow improve our database design and our method of storing data from the html form. We could keep going for a long time inventing algorithms that determine how the file should be written to and read from, but the code would need some serious error-checking, input sanitization, and try-catch recovery blocks written from scratch to handle any bad data that might come in.
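
    To see concretely why a single stray quote is so damaging under this scheme, here is a tiny Python illustration (not code from the app) of how quote pairs shift once an extra quote appears:

      import re

      # Tiny illustration of why one stray double quote breaks a parser that
      # pairs quotes left to right. The strings below mimic two database lines.
      good = '"name": "Billy"\n"email": "billy@billy.com"'
      bad = '"name": """\n"email": "billy@billy.com"'   # sneaky_user's extra quote

      print(re.findall(r'"([^"]*)"', good))
      # ['name', 'Billy', 'email', 'billy@billy.com']  -> keys and values pair up
      print(re.findall(r'"([^"]*)"', bad))
      # ['name', '', '\n', ': ']  -> the pairing shifts and the email is lost

    Every chunk after the stray quote gets mis-paired, which is exactly the kind of corruption shown in the database listing above.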

    Thankfully, there are multiple popular database systems in use that take these issues and many more into account. They also provide a language of commands for creating, reading, updating, and deleting data, which abstracts away certain steps in using the database like the algorithm we were using to parse our text file data. The database system we will be using is PostgreSQL.

    SQL Basics

    SQL (Structured Query Language) is a standard language for communicating with relational databases. Although there are many different implementations of SQL (like MySQL, Oracle, and PostgreSQL), they all share a core set of commands (though certain syntax variations exist between implementations). I will be writing example code as it relates to PostgreSQL and its particular syntax. If you have never used SQL before, I strongly recommend working through some basic tutorials first.

    In SQL, a single app’s data as a whole is usually stored in a single database (there are exceptions), and within that database there are multiple tables that store data about particular objects (like a user or a pet). Within each table are multiple fields (or columns) that represent attributes of an object (like name, email, id, timestamp of day created, etc.). Each instance of an object in the table is called a record (or row). Here is an example command that creates a database:

    
      CREATE DATABASE exampledb;
      

    Since SQL was designed to be mostly (definitely not always) human-readable, that command is pretty self-explanatory: it CREATEs a DATABASE named exampledb. A similar command is used to make a table:

    
      CREATE TABLE exampleusers (
        user_id BIGSERIAL PRIMARY KEY,
        username VARCHAR(32) NOT NULL,
        password VARCHAR(64) NOT NULL
      );
      

    The code inside the outer parentheses defines the columns that will be in the new table. The basic syntax for defining a column is: name type options. So looking at the above table, we are defining a column named “user_id” that is of type BIGSERIAL (a PostgreSQL type that creates an auto-incrementing 64-bit integer backed by a sequence, giving each user a unique identifier) and will be the PRIMARY KEY for this table (more on that later). We define another column named “username” that is of type VARCHAR (which stands for CHARacter VARying) with a maximum length of 32 characters and can NOT be NULL. Similar deal for the password column.

    Once we have a table in a database, we can begin performing commands that will alter its data. This command will insert a user into the exampleusers table:

    
      INSERT INTO exampleusers (username, password) VALUES ('example', 'expass');
      

    The INSERT command needs to know the table it will insert data into, the fields it will insert values into (listed in parentheses to the left of the VALUES keyword), and the values themselves (listed in parentheses to the right of the VALUES keyword). Note that string values in SQL are wrapped in single quotes; in PostgreSQL, double quotes are reserved for identifiers such as table and column names. Also notice that we omitted the user_id field; this is because user_id is of type BIGSERIAL, so it is automatically populated with the next value from the sequence PostgreSQL created for it. We could also write an insert statement that targets only some of the columns:

    
      INSERT INTO exampleusers (username) VALUES ('example');
      

    We do not have to insert data into all of an object’s columns at once; we just have to specify the fields we want to target (although in this particular table the statement above would be rejected, because the password column was declared NOT NULL and has no default). We can also insert multiple objects using a single insert statement:

    
      INSERT INTO exampleusers (username, password) VALUES ('example', 'expass'), ('example2', 'expass2');
      

    To read/request/get data from a database, a SELECT command is used. The following select command will return all the usernames and user_ids in table exampleusers:

    
      SELECT user_id, username FROM exampleusers;
      

    The SELECT command needs to know the columns we want returned FROM the table we specify. The code in the previous example would return the user_ids and usernames of all existing users in the table. We could also write another valid select statement this way:

    
      SELECT * FROM exampleusers;
      

    The asterisk indicates that we want to return all of the columns from the table we specify, which means that user_id, username, and password of all existing users would be returned in the previous example. Select commands may also contain conditional WHERE clauses that include/exclude records in the table:

    
      SELECT * FROM exampleusers WHERE username = 'example';
      

    This code would only return users whose username equals ‘example’ in the table. WHERE clauses can also be chained using the logical AND, OR keywords:

    
      SELECT * FROM exampleusers WHERE username = 'example' OR user_id = 2;
      

    This code would return all users whose username equals ‘example’ or whose user_id = 2.

    To update a particular record in the table, we use the UPDATE command:

    
      UPDATE exampleusers SET username = 'anothername' WHERE username = 'example';
      

    This code would UPDATE records in exampleusers by SETting specific fields to equal a new value WHERE a particular condition exists. If we omitted the WHERE clause in the UPDATE statement, all of the usernames in the table would be updated, which is not something we want to happen very often (if at all).

    To delete a user, we use the DELETE command:

    
      DELETE FROM exampleusers WHERE username = 'example';
      

    This code would DELETE any users in exampleusers whose username equals 'example'. There are many more useful SQL commands and clauses, but these are enough to get us started.

    Installing PostgreSQL

    On many Linux distributions (and on Cloud9), PostgreSQL comes pre-installed. On Debian and Ubuntu systems, you can use apt-get to install/upgrade/remove your system’s PostgreSQL installation:

    
      sudo apt-get install postgresql-9.4
      

    Next, in order to access psql, which is PostgreSQL’s interactive mode in the terminal, we will need to start the PostgreSQL server. In the terminal, type:

    
      sudo service postgresql start
      

    If PostgreSQL is installed correctly and the server was able to start, you’ll see a message in the terminal similar to this (the version number will reflect whatever you have installed):

    
       * Starting PostgreSQL 9.3 database server
       ...done.
      

    Now we need to set a password for the default postgres user so that only authorized users can connect. In the terminal, type:

    
      sudo -u postgres psql
      

    This will open psql. Then type the following to set the password for user “postgres”:

    
      \password postgres
      

    And exit out of psql:

    
      \q
      

    Now that the postgres user has a password, we can log in. In the terminal, type:

    
      psql -U postgres -h localhost
      

    After providing the password we set, this will enter us into psql as user postgres. Now let’s exit back to the normal terminal by entering:

    
      \q
      

    Creating a Database

    In our usage of PostgreSQL, we can create and modify our databases in three general ways: through the psql command prompt, through SQL files, and through code executed in Python files. To create our database and tables, we are going to use SQL files that we can import into psql. We could create everything at the psql command prompt, but if we wanted to delete the database in order to update its structure, we would have to retype a lot of code. Putting our CREATE commands in an SQL file lets us modify just the parts we want without retyping everything, since psql re-reads the commands in the file every time it is imported.

    Change into the pet-app/static directory, and create a directory inside that will hold our SQL code:

    
      mkdir sql
      

    Change into the new sql directory and create a file with the following contents, saving it as “petapp.sql”:

    
        DROP DATABASE IF EXISTS petapp;
        CREATE DATABASE petapp;
    
        \c petapp
    
        DROP OWNED BY petapp_admin;
        DROP ROLE IF EXISTS petapp_admin;
        CREATE ROLE petapp_admin LOGIN PASSWORD 'woofwoof';
    
        CREATE TABLE users (
            user_id BIGSERIAL PRIMARY KEY,
            user_name VARCHAR(32) NOT NULL,
            user_password VARCHAR(64) NOT NULL
        );
    
        CREATE TABLE pets (
            pet_id BIGSERIAL PRIMARY KEY,
            pet_name VARCHAR(32) NOT NULL,
            pet_breed VARCHAR(32) NOT NULL
        );
    
        GRANT SELECT, INSERT, UPDATE, DELETE ON users, pets TO petapp_admin;
        GRANT SELECT, UPDATE ON users_user_id_seq, pets_pet_id_seq TO petapp_admin;
      

    Let’s go over each section of the SQL code. “DROP DATABASE IF EXISTS petapp” checks the list of databases in psql and deletes the “petapp” database if it exists; we do this in order to completely reset the database each time we import the file. The next line creates the petapp database. “\c petapp” is psql’s connect command, which tells psql to connect to the petapp database. We put it immediately after creating the database because we want the subsequent SQL commands to apply to petapp.

    Principle of Least Privilege

    The next three lines deal with commands related to a role. A role in PostgreSQL is a database user who has a specific set of permissions relating to database management and data modification. Roles are important because they allow access to information in the database to be restricted depending on which permissions have been granted. For example, an “author” role could be created that has permission to INSERT and UPDATE posts, but not to SELECT or DELETE them; a “researcher” role could be created that only has permission to SELECT from one table. This is where the Principle of Least Privilege (PoLP) comes into play. PoLP means giving the people (and programs) that access the database the fewest privileges they need to perform their duties, and it matters because it minimizes the amount of damage a database security breach can cause.

    Imagine that the role our Python code uses to access the database has complete administrative privileges to do whatever it wants. One day, a cyber-terrorist gains access to the database, either by guessing the correct authorization credentials or by discovering a SQL exploit in our app. Either way, if the role we have set up to access our database has no restrictions, the likelihood that the attacker could successfully perform a command like this is high:

    
        DROP DATABASE petapp;
      

    This would completely erase our production database! But if we set our administrator’s privileges to only INSERT, SELECT, UPDATE, and DELETE operations on our tables, then the worst an attacker could do is:

    
        DELETE FROM users;
        DELETE FROM pets;
      

    That is still pretty disastrous (it removes every record in our users and pets tables), but not as disastrous as destroying the tables and the database itself. Privileges can be refined in this way to strike a good trade-off between security concerns and access to the database.

    Getting back to the code in the SQL file: “DROP OWNED BY petapp_admin” drops any objects owned by the role petapp_admin and revokes any privileges that have been granted to it (on the very first import, before the role exists, this line will report an error, which psql simply skips over). “DROP ROLE IF EXISTS petapp_admin” deletes the role petapp_admin if it exists. “CREATE ROLE petapp_admin LOGIN PASSWORD ‘woofwoof’” creates a new role called petapp_admin that can log in with the password ‘woofwoof’. The bottom two lines of the SQL file grant the specific permissions we want to give our role. “GRANT SELECT, INSERT, UPDATE, DELETE ON users, pets TO petapp_admin” specifies which commands can be executed on which tables by which role. The line below it does the same for sequences (which PostgreSQL generates automatically whenever a SERIAL or BIGSERIAL field is created in a table); sequences are different from tables, so the set of permissions that can be granted on them is also different. The code in between the CREATE ROLE and GRANT commands creates our tables, which we already covered in the previous section.

    Using PSQL

    Now that we have our SQL code ready, we can import it into psql. The easiest way to import an sql file is to first change into the directory where the sql file resides. Change into the sql directory, and log into psql by typing in the terminal:

    
      psql -U postgres -h localhost
      

    Once you have logged into psql, type and press enter:

    
      \i petapp.sql
      

    The “\i” command will import SQL from the file specified. You should see something similar to this appear after you execute the import command:

    
        postgres=# \i petapp.sql 
        DROP DATABASE
        CREATE DATABASE
        You are now connected to database "petapp" as user "postgres".
        DROP OWNED
        DROP ROLE
        CREATE ROLE
        CREATE TABLE
        CREATE TABLE
        GRANT
        GRANT
        petapp=# 
      

    Notice that since we put the “\c” (connect) command in our SQL file, the psql command prompt changes from postgres=# to petapp=#. To check that our database was really created, we can type the “list” command and press Enter:

    
      \l
      

    This command will display a list of databases similar to this:

    
                                       List of databases
           Name    |  Owner   | Encoding  | Collate | Ctype |   Access privileges   
        -----------+----------+-----------+---------+-------+-----------------------
         petapp    | postgres | SQL_ASCII | C       | C     | 
         postgres  | postgres | SQL_ASCII | C       | C     | 
         template0 | postgres | SQL_ASCII | C       | C     | =c/postgres          +
                   |          |           |         |       | postgres=CTc/postgres
         template1 | postgres | SQL_ASCII | C       | C     | =c/postgres          +
                   |          |           |         |       | postgres=CTc/postgres
         ubuntu    | ubuntu   | SQL_ASCII | C       | C     | 
        (5 rows)
      

    Awesome, petapp was successfully created! Now we can look at the contents of petapp by typing the “describe” command:

    
      \d
      

    This command will display a list of all the relations (tables and sequences) in our current database:

    
                          List of relations
         Schema |       Name        |   Type   |  Owner   
        --------+-------------------+----------+----------
         public | pets              | table    | postgres
         public | pets_pet_id_seq   | sequence | postgres
         public | users             | table    | postgres
         public | users_user_id_seq | sequence | postgres
        (4 rows)
      

    This displays the two tables we explicitly created in the SQL file, pets and users, as well as the sequences that PostgreSQL automatically generated for each table’s BIGSERIAL id column. To display the structure of an individual table, type the “describe” command with a relation name as its argument:

    
      \d pets
      

    The structure of table pets will be displayed:

    
                                              Table "public.pets"
          Column   |         Type          |                       Modifiers                       
        -----------+-----------------------+-------------------------------------------------------
         pet_id    | bigint                | not null default nextval('pets_pet_id_seq'::regclass)
         pet_name  | character varying(32) | not null
         pet_breed | character varying(32) | not null
        Indexes:
            "pets_pkey" PRIMARY KEY, btree (pet_id)
      

    We can also perform queries on our database using the psql command prompt. Type:

    
      INSERT INTO pets (pet_name, pet_breed) VALUES ('fishy', 'minnow');
      

    If the INSERT operation was successful, psql should respond by displaying “INSERT 0 1”. Now we can retrieve a list of pets in our database with a SELECT command:

    
      SELECT * FROM pets;
      

    This should result in psql displaying the data that matched the SELECT criteria (which is all pets in this case):

    
         pet_id | pet_name | pet_breed 
        --------+----------+-----------
              1 | fishy    | minnow
        (1 row)
      

    Let’s go ahead and delete fishy:

    
      DELETE FROM pets WHERE pet_name = 'fishy';
      

    And we can verify that fishy was indeed deleted by executing the same SELECT statement again, which will show that no pets now exist in the database:

    
         pet_id | pet_name | pet_breed 
        --------+----------+-----------
        (0 rows)
      

    Sweeeeet. Now that we know how to work with psql and issue basic SQL commands, we can configure our app to read from and write to our database. Exit psql by issuing the “\q” command.

    Psycopg2

    So far we have been using PostgreSQL exclusively through psql’s interactive mode, but now we are going to write code that integrates PostgreSQL into Python. To do this, we need to install a PostgreSQL adapter for Python called Psycopg. In the terminal, activate your pet-app virtualenv (by typing “workon pet-app”) and type:

    
      pip install psycopg2
      

    This will install the 2.x branch of Psycopg into the pet-app virtualenv. In our server file, we need to import psycopg2 and add a function that will connect us to the database using the credentials we created for our petapp_admin role. In server.py, update the imports section by importing psycopg2 and psycopg2.extras:

    
      import psycopg2
      import psycopg2.extras
      

    Next, add this function before the code for the routes:

    
      def connectToDb():
        connectionString = 'dbname=petapp user=petapp_admin password=woofwoof host=localhost'
        print (connectionString)
        return psycopg2.connect(connectionString)
      

    connectToDb() is a function that will return a connection object if the credentials we pass to psycopg2.connect() are valid. In the connection string, we specify the database name, the user (here, the role we created), the user’s password, and the host (localhost, since we are currently using the development server). Each Python function that wants to perform SQL queries needs to connect to the database through this function first. Let’s update our usersRoute() function so that when we visit the ‘/users’ route, a list of users in the database will be fetched:

    
        @app.route('/users')
        def usersRoute():
            conn = connectToDb()
            cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
            cur.execute("SELECT * FROM users")
            results = cur.fetchall()
            print (results)
            return render_template("users.html", users=results)
      

    Here is what is happening, line by line:

    • conn = connectToDb() saves the result of the attempt to connect to the database in a variable named “conn”.
    • cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor) defines our cursor variable (“cur”), which is what allows Python to execute PostgreSQL commands. Note that this line depends on the “conn” variable defined in the previous line, so if the connection to the database fails, our cursor object will not work.
    • cur.execute("SELECT * FROM users") uses the cur object to execute a PostgreSQL command.
    • results = cur.fetchall() saves the values returned by the SELECT statement in a variable we are calling “results”. If we only wanted a single result, we could have used cur.fetchone().
    • print (results) logs the results to the terminal, which can be very useful for troubleshooting and for seeing the form of the data we are handling.
    • Finally, in the return statement we pass the results of the SELECT command (saved in “results”) to the Jinja templating system and assign them to another variable named “users”.
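
    One thing worth noting: the route above never closes the cursor or the connection, so each request leaves a connection open until Python cleans it up. A slightly more defensive variant (just a sketch that would replace the route above, reusing the same connectToDb() helper) releases both in a finally block:

      @app.route('/users')
      def usersRoute():
          conn = connectToDb()
          cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
          try:
              cur.execute("SELECT * FROM users")
              results = cur.fetchall()
          finally:
              # Always release the cursor and connection, even if the query fails.
              cur.close()
              conn.close()
          return render_template("users.html", users=results)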

    Displaying the Results

    Now that we have a method of calling the database from python, we can display the results from select statements in our Jinja templates. In users.html, we want to iterate over all of the users returned from the select statement, displaying each user’s name and id. In jinja, we can use a “for in” clause to loop through our resulting users. Update your users.html file to incorporate the for loop:

    
        {% extends "layout.html" %}
        {% block content %}
          <p>You are now at the users page</p>
          <p>List of users:</p>
          <ul>
            {% for user in users %}
              <li>
                <span>Name: <b>{{user['user_name']}}</b></span>, 
                <span>Id: <b>{{user['user_id']}}</b></span>
              </li>
            {% endfor %}
          </ul>
        {% endblock %}
      

    Instead of having a static list of members inside the ul tag, now the list members are being generated depending on the resulting users being passed from the server.py file.

    “{% for user in users %}” references the name of the variable we passed in the render_template() return value of our usersRoute() function. The “for user in users” part assigns the data for the current iteration to the temporary variable “user” (this will make much more sense once we run the code). Inside the li tags, we reference the attributes of our current user object. Because we created the cursor with DictCursor, we can reference the values of columns from our database results by using the subscript (“[ ]”) operator with a column name on our temporary user variable. So “<span>Name: <b>{{user[‘user_name’]}}</b></span>” prints the current user’s “user_name” attribute inside b html tags.

    In order to see this new code working properly, we need to insert some users into our database. Log into psql, connect to the petapp database (\c petapp), and insert two users:

    
        INSERT INTO users (user_name, user_password) VALUES ('Billy', 'billypass'), ('Sally', 'sallypass');
      

    Now exit out of psql and run the server.py file. If we navigate to the ‘/users’ route, we should see Billy and Sally listed as our current users.

    Recall that in the usersRoute() function in server.py we included a print statement that logs the results of the select statement to the terminal. After visiting the users route, if we look in the terminal we’ll see the format of the data being returned by the cursor object:

    
        [[1, 'Billy', 'billypass'], [2, 'Sally', 'sallypass']]
      

    It is this object representation (a list of lists) that we based our decisions on when writing the Jinja code in users.html. If we wanted to, in server.py we could have first cast the results to a string and passed the string to the Jinja template; doing that would require us to write Jinja code in users.html that accounts for the different type being passed. In short, there is a lot of flexibility in the format you can use to pass data from the server to the html file, but the code in both files has to agree on that format.
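
    Although the printed rows look like plain lists, the DictCursor we configured makes each row a psycopg2.extras.DictRow, which can be accessed both by position and by column name; that is why the template can use user['user_name']. Here is a small standalone sketch of that behavior (it assumes the Billy and Sally rows inserted above and mirrors the connection string from connectToDb()):

      import psycopg2
      import psycopg2.extras

      # Standalone illustration of DictRow access; not part of server.py.
      conn = psycopg2.connect('dbname=petapp user=petapp_admin password=woofwoof host=localhost')
      cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
      cur.execute("SELECT * FROM users")
      first = cur.fetchall()[0]

      print(first[1])             # prints: Billy  (access by column position)
      print(first['user_name'])   # prints: Billy  (access by column name)

      cur.close()
      conn.close()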

    Wrapping Up

    So now we can read data from our database and put that into our html templates! :grin: In order to avoid making this post an absolute mammoth, we will be covering how to use forms, safely insert data into our database, and hash the most important pieces of our data in the next part of this tutorial series. If you made it this far, good job! :clap: Thanks for reading, and stay tuned for part 5!