Blog

  • A Complete Guidebook on Starting Your Own Homelab for Data Analysis

    There has never been a better time to start your data science homelab for analyzing data useful to you, storing important information, or developing your own tech skills.

    Will Keefe · Published in Towards Data Science

    There’s an expression I’ve read on Reddit a few times now in varying tech-focused subreddits that is along the lines of “Paying for cloud services is just renting someone else’s computer.” While I do think cloud computing and storage can be extremely useful, this article will focus on some of the reasons why I’ve moved my analyses, data stores, and tools away from the online providers, and into my home office. A link to the tools and hardware I used to do this is available as well.

    Introduction

    The best way to start explaining the method to my madness is by sharing a business problem I ran into. While I’m a fairly traditional investor with a low risk tolerance, there is a small hope inside of me that maybe, just maybe, I can be one of the <1% to beat the S&P 500. Note I used the word “hope,” and as such, I don’t put too much on the line chasing it. A few times a year I’ll give my Robinhood account $100 and treat it with as much regard as I treat a lottery ticket — hoping to break it big. I will put the adults in the room at ease, though, by sharing that this account is separate from my larger accounts, which are mostly based on index funds with regular, modest returns, plus a few value stocks on which I sell covered calls on a rolling basis. My Robinhood account, however, is borderline degenerate gambling, and anything goes. I have a few rules for myself though:

    1. I never take out any margin.
    2. I never sell uncovered, only buy to open.
    3. I don’t throw money at chasing losing trades.

    You may wonder where I’m going with this, so I’ll pull back from my tangent: my “lottery tickets” have, alas, not earned me a Jeff-Bezos-worthy yacht yet, but they have taught me a good bit about risk and loss. These lessons have also inspired the data enthusiast inside of me to try to improve the way I quantify risk and attempt to anticipate market trends and events. Even models that are only directionally correct in the short term can provide tremendous value to investors — retail and hedge alike.

    The first step I saw toward improving my decision-making was to have data available to make data-driven decisions; removing emotion from investing is a well-known success tip. While historical data for stocks and ETFs is widely and freely available through open-source resources such as yfinance (an example of mine is below), historical derivatives datasets are much more expensive and difficult to come by. Some initial glances at the APIs available hinted that regular, routine access to data for backtesting strategies for my portfolio could cost me hundreds of dollars annually, and possibly even monthly, depending on the granularity I was seeking.
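
    Something along these lines is all it takes to pull clean daily history; the ticker and window here are arbitrary placeholders rather than a recommendation.

    import yfinance as yf

    # Pull two years of daily history for a single ticker
    spy = yf.Ticker("SPY")
    history = spy.history(period="2y", interval="1d")

    # Add a simple 20-day moving average alongside the close for context
    history["MA20"] = history["Close"].rolling(20).mean()
    print(history[["Close", "MA20"]].tail())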

    I decided I’d rather invest in myself in this process, and spend hundreds of dollars on my own terms instead. *audience groans*

    Building on the Cloud

    My first thoughts on data scraping and warehousing led me to the same tools I use daily in my work. I created a personal AWS account and wrote Python scripts to deploy on Lambda that would scrape free, live option datasets at predetermined intervals and write the data on my behalf. This was a fully automated system and near-infinitely scalable, because a separate scraper would be dynamically spun up for every ticker in my portfolio. Writing the data was more challenging, and I was torn between two routes. I could either write the data to S3, crawl it with Glue, and analyze it with serverless querying in Athena, or I could use the Relational Database Service and write my data directly from Lambda to RDS.
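
    To make the Lambda route more concrete, here is a stripped-down sketch of what one of those per-ticker scrapers could look like. The yfinance option-chain call stands in for whatever free source you scrape, pymysql is just one possible RDS client, and the option_quotes table and its columns are made-up placeholders rather than my actual schema.

    import json
    import os

    import pymysql          # assumed MySQL-compatible RDS driver
    import yfinance as yf   # stand-in for the free options data source

    def lambda_handler(event, context):
        # One ticker per invocation; the ticker arrives in the event payload
        ticker = event["ticker"]
        tk = yf.Ticker(ticker)
        expiry = tk.options[0]              # nearest expiration date
        chain = tk.option_chain(expiry)

        # Flatten the calls side of the chain into rows for the database
        rows = [
            (ticker, expiry, float(r.strike), float(r.bid), float(r.ask))
            for r in chain.calls.itertuples()
        ]

        conn = pymysql.connect(
            host=os.environ["DB_HOST"],
            user=os.environ["DB_USER"],
            password=os.environ["DB_PASSWORD"],
            database=os.environ["DB_NAME"],
        )
        try:
            with conn.cursor() as cur:
                cur.executemany(
                    "INSERT INTO option_quotes (ticker, expiry, strike, bid, ask) "
                    "VALUES (%s, %s, %s, %s, %s)",
                    rows,
                )
            conn.commit()
        finally:
            conn.close()

        return {"statusCode": 200, "body": json.dumps({"rows_written": len(rows)})}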

    A quick breakdown of AWS tools mentioned:

    Lambda is serverless computing allowing users to execute scripts without much overhead and with a very generous free tier.

    S3, aka simple storage service, is an object storage system with a sizable free tier and extremely cost-effective storage at $0.02 per GB per month.

    Glue is an AWS data prep, integration, and ETL tool, with crawlers available for reading tabular data and cataloging its schema.

    Athena is a serverless query service for analyzing data in S3 with standard SQL.

    I ended up leaning toward RDS just to have the data easily queryable and monitorable, if for no other reason. RDS also had a free tier available: 750 instance hours per month as well as 20 GB of storage, giving me a nice sandbox to get my hands dirty in.

    Little did I realize, however, how large stock options data is. I began to write about 100 MB of data per ticker per month at 15-minute intervals, which may not sound like much, but across my portfolio of 20 tickers that is roughly 2 GB a month, enough to exhaust the entire 20 GB free tier well before the end of the year. On top of that, the small compute capacity within the free tier was quickly eaten up, and my server ate through all 750 hours before I knew it (considering I wanted to track options trades for roughly 8 hours a day, 5 days a week). I also frequently read and analyzed data after work at my day job, which drove usage up further. After about two months I exhausted the free tier allotment and received my first AWS bill: about $60 a month. Keep in mind, once the free tier ends, you’re paying for every server hour of processing, a per-GB charge for data transferred out of the AWS ecosystem to my local dev machine, and a storage cost per GB-month. I anticipated that within a month or two my cost of ownership could increase by at least 50%, if not more, and keep climbing from there.

    Yikes.

    Leaving the Cloud

    At this point, I realized I’d rather take that $60 a month I was spending renting equipment from Amazon, put it toward my electric bill, and throw whatever was left over into my Robinhood account, back where we started. As much as I love using AWS tools, when my employer isn’t footing the bill (and to my coworkers reading this, I promise I’m frugal at work too), I really don’t have much interest in investing in them. AWS just isn’t priced for hobbyists. It offers plenty of great free resources for newbies to learn with, and great bang for your buck professionally, but not at this in-between level.

    I had an old Lenovo Y50–70 laptop from before college with a broken screen that I thought I’d repurpose as a home web-scraping bot and SQL server. While these laptops can still fetch a decent price new or certified refurbished (likely due to the i7 processor and dedicated graphics card), my broken screen pretty much totaled the value of the computer, so hooking it up as a server breathed fresh life into it, and about three years of dust out of it. I set it up in the corner of my living room on top of a speaker (next to a gnome) and across from my PlayStation, and set it to “always on” to fulfill its new purpose. My girlfriend even said the obnoxious red backlight of the keyboard “pulled the room together,” for what it’s worth.

    Gnome pictured; at the time the photo was taken, the server was not yet configured.

    Conveniently, my 65″ Call-of-Duty-playable-certified TV was within HDMI cable distance of the laptop, so I could actually see the code I was writing too.

    I migrated my server from the cloud to my janky laptop and was off to the races! I could now perform all of the analysis I wanted at just the cost of electricity: around $0.14/kWh, which worked out to roughly $0.20–0.30 a day. For another month or two, I tinkered and tooled around locally. Typically this looked like a few hours a week after work of opening up my MacBook, playing around with ML models using data from my gnome-speaker-server, visualizing data on local Plotly dashboards, and then directing my Robinhood investments.

    I experienced some limited success. I’ll save the details for another Medium post once I have more data and performance metrics to share, but I decided I wanted to expand from a broken laptop to my own micro cloud. This time, not rented, but owned.

    Building the Home Lab

    “Home Lab” is a name that sounds really complicated and cool *pushes up glasses*, but it is actually relatively straightforward when deconstructed. Basically, there were a few challenges with my broken-laptop setup that provided motivation, as well as new goals and nice-to-haves that provided inspiration.

    Broken laptop problems:

    The hard drive was old, at least 5 or 6 years old, which posed a risk of future data loss. It also slowed down significantly under duress with larger queries, a noted problem with the model.

    Having to use my TV and a Bluetooth keyboard to operate the laptop (running Windows 10 Home) was very inconvenient and not ergonomically friendly.

    The laptop was not upgradeable in the event I wanted to add more RAM beyond what I had already installed.

    The technology was limited in parallelizing tasks.

    The laptop alone was not strong enough to host my SQL server as well as dashboards and the number crunching for my ML models. Nor would I feel comfortable having those services compete for resources on the same computer, shooting each other in the feet.

    A system I would put into place had to solve each of these problems, but there were also new features I wanted to achieve.

    Planned New Features:

    A new home office setup to make working from home from time to time more comfortable.

    Ethernet wiring throughout my entire apartment (if I’m paying for the whole gigabit, I’m going to use the whole gigabit, AT&T).

    Distributed computing* with microservers where appropriate.

    Servers would be capable of being upgraded and swapped out.

    Varying programs and software deployable to achieve different subgoals independently and without impeding current or parallel programs.

    *Distributed computing with the computers I chose is a debated topic that will be explained later in the article.

    I spent a good amount of time conducting research on appropriate hardware configurations. One of my favorite resources I read was “Project TinyMiniMicro”, which compared the Lenovo ThinkCentre Tiny platform, the HP ProDesk/EliteDesk Mini platform, and the Dell OptiPlex Micro platform. Like the authors of Project TMM, I have used single-board computers before, and I have two Raspberry Pis and an Odroid XU4.

    What I liked about my Pis:

    They were small, ate little power, and the new models have 8GB of RAM.

    What I liked about my Odroid XU4:

    It is small, has 8 cores, and is a great emulation platform.

    While I’m sure my SBCs will still find a home in my homelab, remember, I need equipment that can handle the services I want to host. I ended up placing probably the most expensive Amazon order of my entire life and completely redid my entire office. My shopping cart included:

    • Multiple Cat6 Ethernet Cables
    • RJ45 Crimp Tool
    • Zip ties
    • 2 EliteDesk 800 G1 i5 Minis (but was sent G2 #Win)
    • 1 EliteDesk 800 G4 i7 Mini (and sent an even better i7 processor #Win)
    • 2 ProDesk 600 G3 i5 Minis (and was sent a slightly worse i5 #Karma)
    • Extra RAM
    • Multiple SSDs
    • A new office desk to replace my credenza/runner
    • New office lighting
    • Hard drive cloning equipment
    • Two 8-Port Network Switches
    • An Uninterruptible Power Supply
    • A Printer
    • A Mechanical Keyboard (related: I also have five keyboard-and-mouse combos from the computers if anyone wants one)
    • Two new monitors

    If you’d like to see my entire parts list, with links to each item to check it out or to make a purchase for yourself, feel free to head over to my website for a complete list.

    Once my Christmas-in-the-summer arrived with a whole slew of boxes on my doorstep, the real fun could begin. The first step was finishing wiring the ethernet throughout my home. The installers had not connected any ethernet cables to the cable box by default, so I had to cut the ends and install the jacks myself. Fortunately, the AWESOME toolkit I purchased (link on my site) included the crimp tool, the RJ45 ends, and testing equipment to ensure I wired the ends right and to identify which port around my apartment correlated to which wire. Of course, with my luck, the very last of the 8 wires ended up being the one I needed for my office, but the future tenants of my place will benefit from my good deed for the day, I guess. The entire process took around 2–3 hours of wiring the gigabit connections, but fortunately my girlfriend enjoyed helping, and a glass of wine made it go by faster.

    Following wired networking, I began to set up my office by building the furniture, installing the lighting, and unpacking the hardware. My desk setup turned out pretty clean, and I’m happy with how my office now looks.

    Before and After

    As for my hardware setup, each of the computers I purchased came with 16 GB of RAM, which I upgraded to 32 GB, as well as solid-state drives (a few of which I also upgraded). Since every device is running Windows 10 Pro, I can remote into each machine on my network, and I have already set up some of my services. Networking the devices was quite fun as well, although I think my cable management leaves a little room for improvement.

    Front of Home Lab Nodes
    Back of Home Lab Nodes

    Now per the asterisk I had in the beginning, why did I spend around a year’s worth of AWS costs on five computers with like 22 cores total rather than just buy/build a tricked-out modern PC? Well, there are a few reasons, and I’m sure this may be divisive with some of the other tech geeks in the room.

    1. Scalability — I can easily add another node to my cluster here or remove one for maintenance/upgrades.
    2. Cost — It is easy and cheap to upgrade and provide maintenance. Additionally, at around 35W max for most units, the cost of running my servers is very affordable.
    3. Redundancy — If one node goes down (i.e., a CPU dies), I have scripts to rebalance my distributed workloads across the remaining nodes.
    4. Education — I am learning a significant amount that furthers my professional skills and experience, and education is ✨invaluable✨.
    5. It looks cool. Point number 5 here should be enough justification alone.

    Speaking of education though, here are some of the things I learned and implemented in my cluster:

    • When cloning drives from smaller to larger, you will need to extend the new drive’s volumes which frequently requires 3rd party software to do easily (such as Paragon).
    • You need to manually assign static IPs to get reliable results when remoting between desktops.
    • When migrating SQL servers, restoring from a backup is easier than querying between two different servers.

    I’m sure there will be many more lessons I will learn along the way…

    Below is an approximate diagram of my home network now. Not pictured are my wifi devices, such as my MacBook and phone, which jump between the two routers pictured. Eventually, I will also be adding my single-board computers and possibly one more PC to the cluster. Oh yeah, and my old broken-screen laptop? Nobody wanted to buy it on Facebook Marketplace for even $50, so I installed Windows 10 Pro on it for remote access and added it to the cluster too for good measure. That could actually be a good thing, because I can use its GPU to assist in building TensorFlow models (and to play a few turn-based games as well).

    Home Lab Network Diagram
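
    Before handing any TensorFlow training to that machine, a quick check like the one below (assuming TensorFlow 2.x is installed on it) confirms the old laptop’s GPU is actually visible.

    import tensorflow as tf

    # Lists the GPUs TensorFlow can see; an empty list means CPU-only training
    print(tf.config.list_physical_devices("GPU"))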

    Speaking of Tensorflow, here are some of the services and functions I will be implementing in my new home lab:

    • The SQL server (currently hosting my financial datasets, as well as new datasets I am web scraping and will write about later, including my alma mater’s finances and my city’s public-safety data)
    • Docker (for hosting apps/containers I will be building as well as a Minecraft server, because, why not)
    • Jenkins CI/CD system to build, train, and deploy Machine Learning models on my datasets
    • Git Repo for my personal codebase
    • Network Attached Storage supporting my many photos from my photography hobby, documents, and any other data-hoarding activities
    • And other TBD projects/services

    Closing Thoughts:

    Was it worth it? Well, there is an element of “only time will tell.” Once my credit card cools off from my Amazon fulfillment purchases, I’m sure it will enjoy the reprieve from AWS pricing as well. I am also looking forward to being able to build and deploy more of my hobbies, as well as collect more data to write more Medium articles about. Some of my next few planned articles include an analysis of the debt West Virginia University is currently facing, as well as an exploratory data analysis of Nashville’s public safety reporting (and possibly an ML model for anticipating emergency events and allocating resource needs). These data science projects are large enough that they would not be possible without some sort of architecture for storing and querying the massive amount of related data.

    What do you think? Does leaving the cloud and building a home lab sound like a project you would want to do? What would your hardware choice be?

    If you’re curious about the hardware I used, check out my reviews at www.willkeefe.com

    Some of my related recent Medium content:

    Production Planning and Resource Management of Manufacturing Systems in Python

    Efficient supply chains, production planning, and resource allocation management are more important than ever. Python…

    towardsdatascience.com

    Crime Location Analysis and Prediction Using Python and Machine Learning

    Using Python, Folium, and SciPy, models can be built to illustrate crime incidents, calculate the best locations for…

    towardsdatascience.com

  • How to test Laravel with Sanctum API using the Postman

    In the last part, we completed the Laravel Breeze API installation and validated the API using the Breeze Next front end.

    In this blog, we are going to test the API using the Postman application.

    About Postman

    Postman is software used to test an API by sending and receiving requests in multiple data formats, along with auth.

    Postman is an API platform for building and using APIs. Postman simplifies each step of the API lifecycle and streamlines collaboration so you can create better APIs — faster.

    Install Postman

    Click here and complete your Postman installation. After installation, open the Postman application.

    Create a new Postman Collection

    Click the “Create new” button.

    In the popup window, click “Collection”.

    Enter the name “Laravel Admin API” and select the auth type “No auth”.


    Pre-request Script

    With Laravel Sanctum, we use SPA authentication, which works through Laravel’s built-in cookie-based session authentication services. So, we need to set the cookie for all the requests in Postman.

    We can set the cookie by using a Postman Pre-request Script. Add the below code to the collection’s Pre-request Script.

    pm.sendRequest({
        url: pm.collectionVariables.get('base_url')+'/sanctum/csrf-cookie',
        method: 'GET'
    }, function (error, response, {cookies}) {
        if (!error){
            pm.collectionVariables.set('xsrf-cookie', cookies.get('XSRF-TOKEN'))
        }
    })

    In this script, we used some variables. We will create variables in the next step.


    Postman Variables

    Add the host, base_url, and xsrf-cookie variables in the Postman variables section.
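
    The values used here match the exported collection at the end of this post: host points at the front end, base_url at the Laravel app, and xsrf-cookie starts empty because the pre-request script fills it in.

    host:localhost:3000
    base_url:http://localhost
    xsrf-cookie: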


    Postman Add Request

    Click the “Add a request” link and create a new request for registration.

    In the header section, add the “Accept” and “X-XSRF-TOKEN” headers as shown below.

    Also, you can add the values as plain text by clicking “Bulk Edit”.

    Accept:application/json
    X-XSRF-TOKEN:{{xsrf-cookie}}

    In the request Body, add the below values as form-data.

    name:admin
    email:user1@admin.com
    password:password
    password_confirmation:password

    Register API request

    Click the “Send” button.

    You will get an empty response if the user is registered successfully.


    Get User API request

    Now we are going to create a request to get the current user details. Create a new request with the GET method.

    The API URL is /api/user; also add the below headers.

    Accept:application/json
    Referer:{{host}}

    For this request the body is none; click “Send” and you will get the current user details in the response.


    Logout Request

    Create the logout request with the /logout URL and the POST method. Also, add the headers below.

    Accept:application/json
    X-XSRF-TOKEN:{{xsrf-cookie}}

    You will get an empty response after sending the request.


    Login Request

    We have completed user registration, getting the current user, and logout. Only login is pending.

    Header

    Accept:application/json
    X-XSRF-TOKEN:{{xsrf-cookie}}

    Body: Select form-data and insert the below values

    email:user1@admin.com
    password:password

    We have created 4 requests in Postman and validated our Admin API. You can import the exported collection below and use it in Postman. In the next part, we will add permissions and roles to our Admin API.

    {
     "info": {
      "_postman_id": "6822504e-2244-46f9-bba8-115dc36644f6",
      "name": "Laravel Admin API",
      "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
      "_exporter_id": "25059912"
     },
     "item": [
      {
       "name": "Register",
       "request": {
        "method": "POST",
        "header": [
         {
          "key": "Accept",
          "value": "application/json",
          "type": "text"
         },
         {
          "key": "X-XSRF-TOKEN",
          "value": "{{xsrf-cookie}}",
          "type": "text"
         }
        ],
        "body": {
         "mode": "formdata",
         "formdata": [
          {
           "key": "name",
           "value": "admin",
           "type": "text"
          },
          {
           "key": "email",
           "value": "user1@admin.com",
           "type": "text"
          },
          {
           "key": "password",
           "value": "password",
           "type": "text"
          },
          {
           "key": "password_confirmation",
           "value": "password",
           "type": "text"
          }
         ]
        },
        "url": {
         "raw": "{{base_url}}/register",
         "host": [
          "{{base_url}}"
         ],
         "path": [
          "register"
         ]
        }
       },
       "response": []
      },
      {
       "name": "User",
       "request": {
        "method": "GET",
        "header": [
         {
          "key": "Accept",
          "value": "application/json",
          "type": "text"
         },
         {
          "key": "Referer",
          "value": "{{host}}",
          "type": "text"
         }
        ],
        "url": {
         "raw": "{{base_url}}/api/user",
         "host": [
          "{{base_url}}"
         ],
         "path": [
          "api",
          "user"
         ]
        }
       },
       "response": []
      },
      {
       "name": "Logout",
       "request": {
        "method": "POST",
        "header": [
         {
          "key": "Accept",
          "value": "application/json",
          "type": "text"
         },
         {
          "key": "X-XSRF-TOKEN",
          "value": "{{xsrf-cookie}}",
          "type": "text"
         }
        ],
        "url": {
         "raw": "{{base_url}}/logout",
         "host": [
          "{{base_url}}"
         ],
         "path": [
          "logout"
         ]
        }
       },
       "response": []
      },
      {
       "name": "Login",
       "request": {
        "method": "POST",
        "header": [
         {
          "key": "Accept",
          "value": "application/json",
          "type": "text"
         },
         {
          "key": "X-XSRF-TOKEN",
          "value": "{{xsrf-cookie}}",
          "type": "text"
         }
        ],
        "body": {
         "mode": "formdata",
         "formdata": [
          {
           "key": "email",
           "value": "user1@admin.com",
           "type": "text"
          },
          {
           "key": "password",
           "value": "password",
           "type": "text"
          }
         ]
        },
        "url": {
         "raw": "{{base_url}}/login",
         "host": [
          "{{base_url}}"
         ],
         "path": [
          "login"
         ]
        }
       },
       "response": []
      }
     ],
     "event": [
      {
       "listen": "prerequest",
       "script": {
        "type": "text/javascript",
        "exec": [
         "pm.sendRequest({",
         "    url: pm.collectionVariables.get('base_url')+'/sanctum/csrf-cookie',",
         "    method: 'GET'",
         "}, function (error, response, {cookies}) {",
         "    if (!error){",
         "        pm.collectionVariables.set('xsrf-cookie', cookies.get('XSRF-TOKEN'))",
         "    }",
         "})"
        ]
       }
      },
      {
       "listen": "test",
       "script": {
        "type": "text/javascript",
        "exec": [
         ""
        ]
       }
      }
     ],
     "variable": [
      {
       "key": "host",
       "value": "localhost:3000",
       "type": "string"
      },
      {
       "key": "base_url",
       "value": "http://localhost",
       "type": "string"
      },
      {
       "key": "xsrf-cookie",
       "value": "",
       "type": "string"
      }
     ]
    }
    

    Also, all the requests are available in the Postman public workspace below:

    https://www.postman.com/balajidharma/workspace/laravel-admin-api/collection/25059912-6822504e-2244-46f9-bba8-115dc36644f6?action=share&creator=25059912

  • Add Role and Permissions based authentication to Laravel API

    To manage roles & permissions, we are going to add the Spatie Laravel-permission package to our Laravel Admin API.

    The following steps are involved in installing the Laravel permission package for our Laravel Admin API.

    • Install Spatie Laravel-permission package
    • Publish the configuration and migration file
    • Running Migration

    Install Spatie Laravel-permission package

    Install the package using the composer command

    ./vendor/bin/sail composer require spatie/laravel-permission

    Publish the configuration and migration file

    The vendor:publish Artisan command is used to publish the package configuration to the config folder. It also copies the migration files to the migrations folder.

    ./vendor/bin/sail artisan vendor:publish --provider="Spatie\Permission\PermissionServiceProvider"

    Running Migration

    Run the migrations using artisan migrate

    ./vendor/bin/sail artisan migrate

    Now we need to add some roles & permissions and then assign roles to users, so we need to create seeders.

    I created an Admin core package with seeders and common functionality when I was working on the Basic Laravel Admin Panel & Laravel Vue admin panel.

    Add the admin core package to our Admin API

    ./vendor/bin/sail composer require balajidharma/laravel-admin-core

    This admin core package will install the Laravel Menu package. So run the below publish commands

    ./vendor/bin/sail artisan vendor:publish --provider="BalajiDharma\LaravelAdminCore\AdminCoreServiceProvider"
    ./vendor/bin/sail artisan vendor:publish --provider="BalajiDharma\LaravelMenu\MenuServiceProvider"

    Now run the migration with the seeder

    ./vendor/bin/sail artisan migrate --seed --seeder=AdminCoreSeeder

    The seeder throws an error.

    We need to add the HasRoles trait to the User model. Open app/Models/User.php:

    <?php

    // ...

    use Spatie\Permission\Traits\HasRoles;

    class User extends Authenticatable
    {
        use HasApiTokens, HasFactory, Notifiable, HasRoles;

        // ...
    }

    Try running the seeder again, this time with migrate:fresh, which will drop all tables and re-run all of our migrations.

    ./vendor/bin/sail artisan migrate:fresh --seed --seeder=AdminCoreSeeder

    Open the Postman application and test the new user login. In the login request, change the form data to the below email and password.

    Email — superadmin@example.com

    Password — password

    After login, run the get-user request. You will get the super admin details in the response.


    We will create an API for Permission CRUD operations in the next blog.

  • How to Upgrade From Laravel 9.x to Laravel 10.x

    Laravel 10 was released on Feb 14. Laravel 10 requires a minimum PHP version of 8.1. Read more about the release in the Laravel release notes.

    Our Basic Laravel Admin Panel is currently on Laravel 9.x, so it is time to upgrade to Laravel 10.

    Laravel Upgrade From 9.x to 10.x

    The Laravel upgrade involves the following steps.

    • Update PHP version
    • Composer version update
    • Update Composer Dependencies
    • Update Composer Minimum Stability
    • Update Docker Compose

    All the upgrade steps are available in the official Laravel documentation.


    Update PHP version

    Laravel 10 requires PHP 8.1.0 or greater, so update your PHP version if you are using a version below 8.1.

    Now we will check the Admin Panel’s PHP version. The PHP version is displayed on the Admin Panel or Laravel default home page.

    You can also check the PHP and Laravel versions on the command line using the commands below.

    PHP version

    ./vendor/bin/sail php -v
    
    // or
    
    ./vendor/bin/sail php --version
    
    // If you are not using Sail
    php -v

    Laravel Version

    ./vendor/bin/sail artisan -v
    
    //or
    
    ./vendor/bin/sail artisan --version
    
    // If you are not using Sail
    php artisan --version

    Also, you can check the Laravel version on the ./vendor/laravel/framework/src/Illuminate/Foundation/Application.php file.

    Our Laravel Admin Panel uses Laravel Sail (a Docker development environment), so we need to update the PHP version in the docker-compose.yml file. We will update it in the last step.


    Composer version update

    Laravel 10 requires Composer 2.2.0 or greater. If you are using a lower version, uninstall it and install a newer version.

    You can check your composer version using the below commands

    composer -v
    
    composer -vvv about

    If you are using Sail, try the below:

    ./vendor/bin/sail composer -v
    
    ./vendor/bin/sail composer -vvv about

    We already have the composer version above 2.2.0.


    Update Composer Dependencies

    For Laravel 10, we need to update the following dependencies in our application’s composer.json file

    • laravel/framework to ^10.0
    • spatie/laravel-ignition to ^2.0
    • php to ^8.1

    The Admin Panel updated the following dependencies:

    diff --git a/composer.json b/composer.json
    index 381f15d..b0be0bc 100644
    --- a/composer.json
    +++ b/composer.json
    @@ -5,12 +5,12 @@
         "keywords": ["framework", "laravel", "boilerplate", "admin panel"],
         "license": "MIT",
         "require": {
    -        "php": "^8.0.2",
    +        "php": "^8.1",
             "balajidharma/laravel-admin-core": "^1.0",
             "guzzlehttp/guzzle": "^7.2",
    -        "laravel/framework": "^9.19",
    -        "laravel/sanctum": "^2.14.1",
    -        "laravel/tinker": "^2.7",
    +        "laravel/framework": "^10.0",
    +        "laravel/sanctum": "^3.2",
    +        "laravel/tinker": "^2.8",
             "spatie/laravel-permission": "^5.5"
         },
         "require-dev": {
    @@ -19,11 +19,11 @@
             "laravel/breeze": "^1.7",
             "laravel/dusk": "^7.1",
             "laravel/pint": "^1.0",
    -        "laravel/sail": "^1.0.1",
    +        "laravel/sail": "^1.18",
             "mockery/mockery": "^1.4.4",
    -        "nunomaduro/collision": "^6.1",
    -        "phpunit/phpunit": "^9.5.10",
    -        "spatie/laravel-ignition": "^1.0"
    +        "nunomaduro/collision": "^7.0",
    +        "phpunit/phpunit": "^10.0",
    +        "spatie/laravel-ignition": "^2.0"
         },
         "autoload": {
             "psr-4": {

    Update Composer Minimum Stability

    One more change in the composer file: the minimum-stability setting needs to be updated to stable.

    "minimum-stability": "stable",

    After the composer changes, run the composer update:

    ./vendor/bin/sail composer update

    Now open the application home page.

    If you want the updated welcome page, copy https://raw.githubusercontent.com/laravel/laravel/10.x/resources/views/welcome.blade.php and update resources/views/welcome.blade.php.


    Update Docker Compose

    We are going to update docker-compose.yml with the latest changes from Laravel.

    The latest Laravel Sail uses PHP version 8.2. Find the final version of docker-compose.yml below:

    # For more information: https://laravel.com/docs/sail
    version: '3'
    services:
        laravel.test:
            build:
                context: ./vendor/laravel/sail/runtimes/8.2
                dockerfile: Dockerfile
                args:
                    WWWGROUP: '${WWWGROUP}'
            image: sail-8.2/app
            extra_hosts:
                - 'host.docker.internal:host-gateway'
            ports:
                - '${APP_PORT:-80}:80'
                - '${VITE_PORT:-5173}:${VITE_PORT:-5173}'
            environment:
                WWWUSER: '${WWWUSER}'
                LARAVEL_SAIL: 1
                XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
                XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
            volumes:
                - '.:/var/www/html'
            networks:
                - sail
            depends_on:
                - mysql
                - redis
                - meilisearch
                - mailpit
                - selenium
        mysql:
            image: 'mysql/mysql-server:8.0'
            ports:
                - '${FORWARD_DB_PORT:-3306}:3306'
            environment:
                MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
                MYSQL_ROOT_HOST: "%"
                MYSQL_DATABASE: '${DB_DATABASE}'
                MYSQL_USER: '${DB_USERNAME}'
                MYSQL_PASSWORD: '${DB_PASSWORD}'
                MYSQL_ALLOW_EMPTY_PASSWORD: 1
            volumes:
                - 'sail-mysql:/var/lib/mysql'
            networks:
                - sail
            healthcheck:
                test:
                    - CMD
                    - mysqladmin
                    - ping
                    - '-p${DB_PASSWORD}'
                retries: 3
                timeout: 5s
        redis:
            image: 'redis:alpine'
            ports:
                - '${FORWARD_REDIS_PORT:-6379}:6379'
            volumes:
                - 'sail-redis:/data'
            networks:
                - sail
            healthcheck:
                test:
                    - CMD
                    - redis-cli
                    - ping
                retries: 3
                timeout: 5s
        meilisearch:
            image: 'getmeili/meilisearch:latest'
            ports:
                - '${FORWARD_MEILISEARCH_PORT:-7700}:7700'
            volumes:
                - 'sail-meilisearch:/meili_data'
            networks:
                - sail
            healthcheck:
                test:
                    - CMD
                    - wget
                    - '--no-verbose'
                    - '--spider'
                    - 'http://localhost:7700/health'
                retries: 3
                timeout: 5s
        mailpit:
            image: 'axllent/mailpit:latest'
            ports:
                - '${FORWARD_MAILPIT_PORT:-1025}:1025'
                - '${FORWARD_MAILPIT_DASHBOARD_PORT:-8025}:8025'
            networks:
                - sail
        selenium:
            image: 'selenium/standalone-chrome'
            extra_hosts:
                - 'host.docker.internal:host-gateway'
            volumes:
                - '/dev/shm:/dev/shm'
            networks:
                - sail
        phpmyadmin:
            image: phpmyadmin/phpmyadmin
            links:
                - mysql:mysql
            ports:
                - 8080:80
            environment:
                MYSQL_USERNAME: "${DB_USERNAME}"
                MYSQL_ROOT_PASSWORD: "${DB_PASSWORD}"
                PMA_HOST: mysql
            networks:
                - sail
    networks:
        sail:
            driver: bridge
    volumes:
        sail-mysql:
            driver: local
        sail-redis:
            driver: local
        sail-meilisearch:
            driver: local

    We have successfully upgraded our Admin Panel to Laravel 10.x


  • Laravel: Automate Code Formatting!

    Pint is one of the newest members of the Laravel first-party packages and will help us have more readable and consistent code.

    Installing and configuring Laravel Pint is easy, and it is built on top of PHP-CS-Fixer, so it has tons of rules to fix code style issues. (You don’t need Laravel 9 to use Pint, and it’s a zero-dependency package.)

    But running Pint can be quite painful, because every time we want to push our changes to the remote repository we have to run the below command manually:

    ./vendor/bin/pint --dirty

    The --dirty flag will run PHP-CS-Fixer on changed files only. If we want to check styles for all files, just remove the --dirty flag.

    In this article, we want to automate running the code style check with Pint before committing any changed file, so the whole team has a well-defined code structure and nobody needs to run Laravel Pint manually every time before pushing code to the remote repo!

    Before we start, note that this is a very simple setup and you can add as many options as you want to Laravel Pint.

    In order to run ./vendor/bin/pint --dirty just before every commit, we should use the pre-commit hook inside the .git folder.

    First of all, we will create a scripts folder inside our Laravel root directory. In this folder we will have a setup.sh file and a pre-commit file without any extension.

    scripts/
    setup.sh
    pre-commit

    Inside our setup.sh we have:

    #! /usr/bin/env bash
    
    cp scripts/pre-commit .git/hooks/pre-commit
    chmod +x .git/hooks/pre-commit

    And write the following lines in the pre-commit file:

    #! /usr/bin/env bash
    
    echo "Check php code styles..."
    echo "Running PHP cs-fixer"
     ./vendor/bin/pint --dirty
     git add .
    echo "Done!"

    Second of all, we should go to the composer.json file and add this to the scripts object (if the post-install-cmd key does not exist, you should create the post-install-cmd part and then add the lines below):

    "post-install-cmd": [
                "bash scripts/setup.sh"
            ]

    Third of all, we will require the Pint package:

    composer require laravel/pint --dev

    And to be sure, don’t forget to run:

    composer install

    The composer install command will add the pre-commit hook to our .git folder and after that we are ready to go!

    From now on, we can simply write our code, and just before we commit our changes, the Pint command will run automatically and fix our code styles!

    Pint uses the Laravel code style by default, but if you want to use PSR-12 like me, you can create a pint.json file inside the root directory of your Laravel project and copy the JSON below for a more opinionated PHP code style:

    {
        "preset": "psr12",
        "rules": {
            "simplified_null_return": true,
            "blank_line_before_statement": {
                "statements": ["return", "try"]
            },
            "binary_operator_spaces": {
                "operators": {
                    "=>": "align_single_space_minimal"
                }
            },
            "trim_array_spaces": false,
            "new_with_braces": {
                "anonymous_class": false
            }
        }
    }

    This is a simple config for our Pint command and will simplify null returns and define an equal indentation for arrays. You can check all PHP-CS-Fixer options here!

  • Laravel works with Large database records using the chunk method

    Your application’s database records increase every day. As developers, we face performance and server memory issues when working with large tables. In this blog, we are going to process a large table’s records and explain the importance of the Eloquent chunk method.

    We need a demo application to work with large records.

    Laravel Installation

    As usual, we are going to install the Basic Laravel Admin Panel locally. This basic admin comes with users, roles, and permissions.

    The Basic Laravel Admin Panel is based on Laravel Sail. What is Sail? Sail is a built-in solution for running your Laravel project using Docker.

    Refer to the installation steps at https://github.com/balajidharma/basic-laravel-admin-panel#installation and complete the installation.


    Demo data

    For demo records, we are going to create dummy users in the users table using a Laravel seeder. To generate a seeder, execute the make:seeder Artisan command.

    ./vendor/bin/sail php artisan make:seeder UserSeeder
    
    INFO Seeder [database/seeders/UserSeeder.php] created successfully.

    Open the generated seeder file located at database/seeders/UserSeeder.php and update it with the below code.

    <?php
    namespace Database\Seeders;
    use Illuminate\Database\Seeder;
    use Illuminate\Support\Facades\DB;
    use Illuminate\Support\Facades\Hash;
    use Illuminate\Support\Str;
    class UserSeeder extends Seeder
    {
        /**
         * Run the database seeds.
         *
         * @return void
         */
        public function run()
        {
            for ($i=0; $i < 1000; $i++) {
                DB::table('users')->insert([
                    'name' => Str::random(10),
                    'email' => Str::random(10).'@gmail.com',
                    'password' => Hash::make('password'),
                ]);
            }
        }
    }

    Now run the seeder using the below Artisan command. It will take extra time to complete the seeding.

    ./vendor/bin/sail php artisan db:seed --class=UserSeeder

    After the Artisan command, verify the created users on the user list page http://localhost/admin/user


    Processing large records

    Now we are going to process the large set of user records. Assume we need to send Black Friday offer notification emails to all the users. Usually, we generate a new Artisan command and send the emails using a scheduled job.

    Memory issue

    We will fetch all the users and send emails inside each loop.

    $users = User::all();
    $users->each(function ($user, $key) {
        echo $user->name;
    });

    If you have millions of records, or if your result collection has a lot of relation data, your server will throw an “Allowed memory size of … bytes exhausted” error.

    To overcome this issue, we could process a limited batch of data at a time by saving our position in the database or a cache.

    Example: the first time, we fetch the first 100 records and save the offset of 100 in a database table.
    The next time, we fetch records 100 to 200 and save the offset of 200 in the database. So this method involves an additional fetch and update, and we also need to stop the job once all the records have been processed.

    Laravel provides an inbuilt solution, the Eloquent chunk method, to process large record sets.


    Laravel Eloquent chunk method

    The Laravel Eloquent chunk method retrieves a small chunk of results at a time and feeds each chunk into a closure for processing.

    User::chunk(100, function ($users) {
        foreach ($users as $user) {
            echo $user->name;
        }
    });

    Understand the chunk method

    I will create one function in the user controller and explain the chunk method in detail.

    Open the routes/admin.php and add the below route

    Route::get('send_emails', 'UserController@sendEmails');

    Now open the app/Http/Controllers/Admin/UserController.php and add the sendEmails method.

    Without chunk:
    After adding the below code, open the http://localhost/admin/send_emails page.

    public function sendEmails()
    {
        $users = User::all();
        $users->each(function ($user, $key) {
            echo $user->name;
        });
    }

    Open the Laravel Debugbar queries panel. The select * from users query fetches all 1,000+ records.

    With chunk method:
    Replace the same function with the below code and check the page in the browser.

    public function sendEmails()
    {
        User::chunk(100, function ($users) {
            foreach ($users as $user) {
                echo $user->name;
            }
        });
    }

    The chunk method adds limits and processes all the records in batches. So if using chunk, it processes a collection of 100 records at a time. So no more memory issues.


    What is chunkById?

    The chunkById method will automatically paginate the results based on the record’s primary key. To understand it, again update the sendEmails method with the below code.

    public function sendEmails()
    {
        User::chunkById(100, function ($users) {
            foreach ($users as $user) {
                echo $user->name;
            }
        });
    }

    Now the user id is added to the where condition along with the limit of 100.

    // chunkById
    select * from `users` where `id` > 100 order by `id` asc limit 100
    select * from `users` where `id` > 200 order by `id` asc limit 100
    select * from `users` where `id` > 300 order by `id` asc limit 100
    // chunk
    select * from `users` order by `users`.`id` asc limit 100 offset 0
    select * from `users` order by `users`.`id` asc limit 100 offset 100
    select * from `users` order by `users`.`id` asc limit 100 offset 200

    This chunkById method is recommended when updating or deleting records inside the closure (in the loop), because modifying the underlying rows can cause the plain chunk method’s offset-based queries to skip records.


    Conclusion

    The Eloquent chunk method is a very useful method when you work with large record sets. Also, read about the collection chunk method.

  • Restructuring a Laravel controller using Services & Action Classes

    Laravel Refactoring — Laravel creates an admin panel from scratch — Part 11

    In the previous part, we moved the UserController store method validation to a Form Request. In this part, we are going to explore and use the trending Action and Service classes.

    We are going to cover the below topics in this blog:

    • Laravel project structure
    • Controller Refactoring
    • Service Class
      • What is Service Class
      • Implement Service Class
    • Action Class
      • Implement Action Class
    • Advantages of Services & Action Classes
    • Disadvantages of Services & Action Classes
    • Conclusion

    Laravel project structure

    Laravel does not restrict your project structure, nor does it suggest one. So, you have the freedom to choose your own project structure.

    Laravel gives you the flexibility to choose the structure yourself

    We will explore both Service & Action classes, and we will use these classes in our Laravel Basic Admin Panel.

    Controller Refactoring

    The UserController store function does the below 3 actions.

    public function store(StoreUserRequest  $request)
    {
        // 1.Create a user
        $user = User::create([
            'name' => $request->name,
            'email' => $request->email,
            'password' => Hash::make($request->password)
        ]);
        // 2.Assign role to user
        if(! empty($request->roles)) {
            $user->assignRole($request->roles);
        }
        // 3.Redirect with message
        return redirect()->route('user.index')
                        ->with('message','User created successfully.');
    }
    

    To refactor further, we can move the logic into another class. These new classes are called Service & Action classes. We will look at them one by one.


    Services Class

    We decided to move the logic to another class. Laravel best practices suggest moving business logic from controllers to service classes due to the single-responsibility principle (SRP). The Service class is just a plain PHP class where we put all our logic.

    What is Service Class

    A service is a very simple class that does not extend any other class. So, it is just a standalone PHP class.

    We are going to create a new app/Services/Admin/UserService.php service class with a createUser method. This is a custom PHP class in Laravel, so there is no Artisan command for it; we need to create it manually.

    Implement Service Class

    app/Services/Admin/UserService.php

    <?php
    namespace App\Services\Admin;
    
    use App\Models\User;
    use Illuminate\Support\Facades\Hash;
    
    class UserService
    {
        public function createUser($data): User
        {
            $user = User::create([
                'name' => $data->name,
                'email' => $data->email,
                'password' => Hash::make($data->password),
            ]);
    
            if(! empty($data->roles)) {
                $user->assignRole($data->roles);
            }
    
            return $user;
        }
    }

    Then, in the UserController, call this method. For automatic injection, you may type-hint the dependency in the controller method.

    Blog Updated: Earlier, I passed the $request (function createUser(Request $request)) directly to the service class. The service may be used by other callers, so $request is now converted to an object and passed as a parameter.

    app/Http/Controllers/Admin/UserController.php

    use App\Services\Admin\UserService;
    public function store(StoreUserRequest $request, UserService $userService)
    {
        $userService->createUser((object) $request->all());
        return redirect()->route('user.index')
                        ->with('message','User created successfully.');
    }

    We can do some more refactoring on the UserService class by moving the user role assignment to a new method.

    app/Services/Admin/UserService.php

    class UserService
    {
        public function createUser($data): User
        {
            $user = User::create([
                'name' => $data->name,
                'email' => $data->email,
                'password' => Hash::make($data->password),
            ]);
            return $user;
        }
        public function assignRole($data, User $user): void
        {
            $roles = $data->roles ?? [];
            $user->assignRole($roles);
        }
    }

    app/Http/Controllers/Admin/UserController.php

    public function store(StoreUserRequest $request, UserService $userService)
    {
        $data = (object) $request->all();
        $user = $userService->createUser($data);
        $userService->assignRole($data, $user);
        return redirect()->route('user.index')
                        ->with('message','User created successfully.');
    }

    Now we have implemented the Service class. We will discuss the benefits at the end of the blog.

    Click here to view examples of service classes used in Laravel.


    Action Class

    In the Laravel community, the concept of Action classes has become very popular in recent years. An Action is a very simple PHP class, similar to a Service class, but an Action class has only one public method, such as execute or handle (you could name that method whatever you want).

    Implement Action Class

    We are going to create a new app/Actions/Admin/User/CreateUser.php Action class with a single handle method.

    app/Actions/Admin/User/CreateUser.php

    <?php
    
    namespace App\Actions\Admin\User;
    
    use App\Models\User;
    use Illuminate\Support\Facades\Hash;
    
    class CreateUser
    {
        public function handle($data): User
        {
            $user = User::create([
                'name' => $data->name,
                'email' => $data->email,
                'password' => Hash::make($data->password),
            ]);
    
            $roles = $data->roles ?? [];
            $user->assignRole($roles);
    
            return $user;
        }
    }

    Now call this handle method in the UserController, using method injection to resolve CreateUser.

    app/Http/Controllers/Admin/UserController.php

    public function store(StoreUserRequest $request, CreateUser $createUser)
    {
        $createUser->handle((object) $request->all());
        return redirect()->route('user.index')
                        ->with('message','User created successfully.');
    }

    The biggest advantage of the Action class is that we don’t have to worry about the function name, because it always has a single function like handle.


    Advantages of Services & Action Classes

    • Code reusability: we can call the method from an Artisan command, and it is also easy to call from other controllers.
    • Single-responsibility principle (SRP): achieved by using Service & Action classes.
    • Avoiding conflicts: easier to manage code for larger applications with a large development team.

    Disadvantages of Services & Action Classes

    • Too many classes: we need to create many classes for a single piece of functionality.
    • Small applications: not recommended for smaller applications.

    Conclusion

    As said earlier, Laravel gives you the flexibility to choose the structure yourself. Service and Action classes are one such structure. They are recommended for large-scale applications to avoid conflicts and enable faster releases.

    For the Laravel Basic Admin Panel, I am going with the Actions classes.

    The Laravel admin panel is available at https://github.com/balajidharma/basic-laravel-admin-panel. Install the admin panel and share your feedback.

    Thank you for reading.

    Stay tuned for more!

    Follow me at balajidharma.medium.com.


    References

    https://freek.dev/1371-refactoring-to-actions
    https://laravel-news.com/controller-refactor
    https://farhan.dev/tutorial/laravel-service-classes-explained/
  • Resize ext4 file system

    Using Growpart

    $ growpart /dev/sda 1
    CHANGED: partition=1 start=2048 old: size=39999455 end=40001503 new: size=80000991,end=80003039
    $ resize2fs /dev/sda1
    resize2fs 1.45.4 (23-Sep-2019)
    Filesystem at /dev/sda1 is mounted on /; on-line resizing required
    old_desc_blocks = 3, new_desc_blocks = 5
    The filesystem on /dev/sda1 is now 10000123 (4k) blocks long.

    Using Parted & resize2fs

    apt-get -y install parted
    parted /dev/vda unit s print all # print current data for a case
    parted /dev/vda resizepart 2 yes -- -1s # resize /dev/vda2 first
    parted /dev/vda resizepart 5 yes -- -1s # resize /dev/vda5
    partprobe /dev/vda # re-read partition table
    resize2fs /dev/vda5 # get your space

    Parted doesn’t work on ext4 on CentOS. I had to use fdisk to delete and recreate the partition, which (I validated) works without losing data. I followed the steps at http://geekpeek.net/resize-filesystem-fdisk-resize2fs/. Here they are, in a nutshell:

    $ sudo fdisk /dev/sdx
    > c            # turn off DOS-compatibility mode
    > u            # switch the display units to sectors
    > p            # print the partition table and note the start sector
    > d            # delete the partition (only the table entry; the data stays on disk)
    > p            # confirm the partition is gone
    > w            # write the change and exit
    $ sudo fdisk /dev/sdx
    > c            # same session setup as before
    > u
    > p
    > n            # create a new partition
    > p            # primary
    > 1            # partition number 1
    > (default)    # first sector: accept the default (it must match the old start sector)
    > (default)    # last sector: accept the default to use all of the available space
    > p            # verify the new, larger partition
    > w            # write the new table and exit
    $ sudo partprobe /dev/sdx   # or reboot, so the kernel re-reads the partition table
    $ sudo resize2fs /dev/sdx1  # grow the ext4 file system into the enlarged partition

    Source: https://serverfault.com/questions/509468/how-to-extend-an-ext4-partition-and-filesystem

  • 1. Why?

    There are a lot of ways to manage your company, home, or corporate DNS zones. You can offload the task to a DNS registrar, you can run any available DNS server software with any back-end that you like, or … you can use Zabbix, and particularly the Zabbix database, as your trusty back-end. Consider the simple fact that you have already installed and configured Zabbix on your network, and invested considerable time and effort in doing so. Looking inside Zabbix, you can see that it knows a great deal about your infrastructure: its host names and IP addresses. Maybe you are also running a discovery process on your network and keeping this portion of the configuration up to date. Maybe you have already integrated Zabbix with your inventory system, and with your ticketing system; if you have not done that already, maybe you should. So your Zabbix installation is already one of the central points of your enterprise management. Is there any reason you are still using vi to manage your DNS zones, or paying somebody to do it for you, when you have everything you need at your fingertips?

    2. What will you need?

    Aside from Zabbix itself, not much:

    Some time and software development skills …

    3. Prepare your environment.

    I will not cover how to install and configure Python on your target hosts; you can install it from rpm/deb repositories or compile it from scratch. Next, download the Unbound DNS resolver and compile it. I do this with the following command:

    ./configure --with-libevent --with-pyunbound --with-pthreads --with-ssl --with-pythonmodule

    Please note that you will need the development files for libevent, OpenSSL, POSIX threads, and Python on your host.

    Next, compile and install the REDIS server. I will leave you with the excellent Redis documentation as your guide through this process; all I want to say is that it is not difficult to do. After you have compiled and installed Redis, install the Python Redis module, redis-py.

    4. Overview of the design.

    You will have a number of components in your Zabbix-DNS infrastructure.

    • REDIS servers. These serve as the primary storage for your direct and reverse mappings. Depending on the size of your DNS zones, you may want to scale up the memory on the hosts
      running your REDIS servers. All REDIS servers are configured for persistence.
    • DNS_REDIS_SYNC. A script that queries the interface SQL table in the Zabbix database and populates the master REDIS server.
    • resolver.py. An Unbound Python module that provides the interfacing between the Zabbix database, REDIS, and the UNBOUND resolver.

    5. Masters and slaves.

    I am intentionally insisting on the more complicated master-slave configuration for your installation. When you need to scale your DNS cluster, you will appreciate having done this. Depending on your Zabbix configuration, choose an appropriate location for your master REDIS server and the DNS_REDIS_SYNC process.

    Depending on the size of your Zabbix installation and its number of NVPS, you may consider performing the “select” operations on the “interface” SQL table against a slave MySQL server that is less busy with inserts and updates.

    How to set up master-slave MySQL replication is outside of the scope of this article.

    Google it. The slave REDIS node should be local to its DNS resolver.

    6. DNS_REDIS_SYNC

    DNS_REDIS_SYNC is a simple Python script (or whatever language you choose to use, as long as it can interface with MySQL and REDIS) designed to populate the master REDIS storage. To get the information from the interface table, you can issue the query

    select interfaceid,ip,dns from interface where type = 1

    When you have got all your Name->IP associations from the Zabbix database, start populating the direct and reverse zones in REDIS, like so:

    SET A:%(name) %(ip)

    SET PTR:%(ip) %(name)

    You do not want keys to stick around in your REDIS forever, so I recommend setting a conservative expiration on your keys (see Chapter 7):

    EXPIRE A:%(name) %(expiration_time_in_sec)

    EXPIRE PTR:%(ip) %(expiration_time_in_sec)

    That’s it. Your REDIS database is ready to be used by the resolver.py module.
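
    To make this concrete, here is a minimal sketch of such a sync script using pymysql and redis-py. The database and Redis connection parameters and the 900-second TTL are placeholder assumptions, not values prescribed by this setup.

    # dns_redis_sync.py - minimal sketch; connection details and the TTL below are placeholders.
    import pymysql
    import redis

    EXPIRE = 900  # conservative key TTL in seconds (see Chapter 7)

    db = pymysql.connect(host='zabbix-db-slave', user='zabbix_ro',
                         password='secret', database='zabbix')
    r = redis.StrictRedis(host='redis-master', port=6379, decode_responses=True)

    with db.cursor() as cur:
        # Agent interfaces only (type = 1), exactly the query shown above.
        cur.execute("select interfaceid, ip, dns from interface where type = 1")
        for interfaceid, ip, dns in cur.fetchall():
            if not dns:
                continue  # skip interfaces that are registered by IP only
            r.set('A:%s' % dns, ip, ex=EXPIRE)    # direct zone: name -> IP
            r.set('PTR:%s' % ip, dns, ex=EXPIRE)  # reverse zone: IP -> name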

    7. To expire or not to expire.

    The easiest, and most dangerous, way to remove stale information from the DNS zones stored in REDIS is to rely on the REDIS EXPIRE command and its capabilities. This works great as long as you never end up in a situation like this:

    Downtime of the Zabbix MySQL server > Key expiration time.

    One way to deal with that situation is to monitor the downtime of the primary Zabbix MySQL server from another Zabbix server that is configured to monitor the primary (you should have such a server already) and, when the downtime crosses a pessimistic threshold, execute an Action script that extends the TTL of the keys in the master REDIS server, along the lines of the sketch below.
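
    A possible sketch of such an Action script, assuming the key layout from Chapter 6; the Redis host name and the six-hour grace period are arbitrary placeholders:

    # extend_ttl.py - push key TTLs out while the primary Zabbix MySQL server is down.
    import redis

    GRACE = 6 * 3600  # placeholder grace period in seconds
    r = redis.StrictRedis(host='redis-master', port=6379, decode_responses=True)

    # Walk both the direct and the reverse key spaces and reset their TTLs.
    for pattern in ('A:*', 'PTR:*'):
        for key in r.scan_iter(match=pattern):
            r.expire(key, GRACE)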

    8. Anatomy of resolver.py

    Before you write your resolver.py, consult the Unbound documentation on how to write Unbound Python modules and how to use the unbound module. Also, be aware of a “gotcha” for resolver.py: since it is executed in embedded Python, it does not inherit the location of some of your Python modules, so be prepared to define the path to those modules using sys.path.append(…) calls.

    The main callback for query processing inside resolver.py is the function “operate(id, event, qstate, qdata)”. Its parameters are:

    • id is the module identifier (an integer);
    • event is the type of event passed to the module. See the documentation for the available event types; for the resolver we need to catch MODULE_EVENT_PASS and MODULE_EVENT_NEW;
    • qstate is a module_qstate data structure;
    • qdata is a query_info data structure.

    First, qstate.qinfo.qname_str will contain the query name. The best way to detect whether this is a direct-zone or a reverse-zone query is to issue this call

    socket.inet_aton(qstate.qinfo.qname_str[:-1])

    and then catch the exception: if inet_aton raises an exception, the name is not an IP address and you are dealing with a direct-zone query; if it does not, it is a reverse-zone one.
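
    As a small sketch of that detection step: in practice a reverse query arrives as a name like 4.3.2.1.in-addr.arpa., so the helper below (the name reverse_zone_ip is mine, not part of Unbound) strips that suffix and flips the octets back before applying the inet_aton check:

    import socket

    def reverse_zone_ip(qname):
        # Return the queried IPv4 address for a reverse-zone query,
        # or None for a direct-zone query.
        name = qname.rstrip('.')
        suffix = '.in-addr.arpa'
        if not name.endswith(suffix):
            return None  # no in-addr.arpa suffix: direct zone
        # The octets in an in-addr.arpa name are reversed; flip them back.
        candidate = '.'.join(reversed(name[:-len(suffix)].split('.')))
        try:
            socket.inet_aton(candidate)  # raises OSError if not a valid IPv4 address
            return candidate
        except OSError:
            return None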

    Second, you will need to build a return message, like this:

    msg = DNSMessage(qstate.qinfo.qname_str, RR_TYPE_A, RR_CLASS_IN, PKT_QR | PKT_RA | PKT_AA)

    Then, depending on which zone you are querying, you send one of the following requests to REDIS:

    GET A:%(name)

    GET PTR:%(ip)

    If REDIS returns None, you should query the Zabbix MySQL database with one of the following queries:

    select interfaceid,ip,dns from interface where type = 1 and dns = '%(name)';

    select interfaceid,ip,dns from interface where type = 1 and ip = '%(ip)';

    If the MySQL query returns data, populate REDIS as described in Chapter 6, fill in the return message, and then invalidate and re-populate the UNBOUND cache using the following calls:

    invalidateQueryInCache(qstate, qstate.return_msg.qinfo)

    storeQueryInCache(qstate, qstate.return_msg.qinfo, qstate.return_msg.rep, 0)

    The return message is filled by appending results to msg.answer:

    "%(name) 900 IN A %(ip)"
    "%(in_addr_arpa) 900 IN PTR %(name)."

    for direct and reverse zones.

    qstate must also be updated with information about the return message before you manipulate the UNBOUND cache:

    msg.set_return_msg(qstate)
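
    Putting the pieces together, here is a minimal sketch of a resolver.py that answers from the local (slave) REDIS only. The MySQL fallback described above is omitted for brevity, the Redis connection details and module path are placeholders, and the reverse-zone detection inlines the suffix check from the helper sketched earlier rather than calling inet_aton on the raw query name.

    # resolver.py - minimal Unbound Python module sketch (no MySQL fallback).
    import sys
    sys.path.append('/usr/lib/python3/dist-packages')  # embedded Python: add module paths explicitly

    import redis

    r = redis.StrictRedis(host='127.0.0.1', port=6379, decode_responses=True)

    def init(id, cfg):
        return True

    def deinit(id):
        return True

    def inform_super(id, qstate, superqstate, qdata):
        return True

    def operate(id, event, qstate, qdata):
        if event not in (MODULE_EVENT_PASS, MODULE_EVENT_NEW):
            qstate.ext_state[id] = MODULE_ERROR
            return True

        qname = qstate.qinfo.qname_str        # e.g. "myhost.example.com." or "4.3.2.1.in-addr.arpa."
        name = qname[:-1]                      # strip the trailing dot
        suffix = '.in-addr.arpa'

        if name.endswith(suffix):
            # Reverse zone: rebuild the IP and look up PTR:%(ip).
            ip = '.'.join(reversed(name[:-len(suffix)].split('.')))
            answer = r.get('PTR:%s' % ip)
            rr = '%s 900 IN PTR %s.' % (qname, answer) if answer else None
            rtype = RR_TYPE_PTR
        else:
            # Direct zone: look up A:%(name).
            answer = r.get('A:%s' % name)
            rr = '%s 900 IN A %s' % (qname, answer) if answer else None
            rtype = RR_TYPE_A

        if rr is None:
            # REDIS miss: the full version would fall back to the Zabbix MySQL
            # queries above; this sketch simply hands the query to the next module.
            qstate.ext_state[id] = MODULE_WAIT_MODULE
            return True

        msg = DNSMessage(qname, rtype, RR_CLASS_IN, PKT_QR | PKT_RA | PKT_AA)
        msg.answer.append(rr)
        if not msg.set_return_msg(qstate):
            qstate.ext_state[id] = MODULE_ERROR
            return True

        # Invalidate and re-populate the UNBOUND cache with the fresh answer.
        invalidateQueryInCache(qstate, qstate.return_msg.qinfo)
        storeQueryInCache(qstate, qstate.return_msg.qinfo, qstate.return_msg.rep, 0)

        qstate.return_rcode = RCODE_NOERROR
        qstate.ext_state[id] = MODULE_FINISHED
        return True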

    9. Summary.

    Well, now you know enough to integrate the information from your Zabbix instance into your enterprise DNS.