Author: bandel

  • Which Laravel authentication package to choose?

    At its core, Laravel provides “guards” and “providers” to manage authentication. However, controllers, routes, and views still need to be implemented.

    To fill this gap, Laravel provides first-party packages that deliver a complete authentication system, ready to use in just a few minutes. This is one of Laravel’s great strengths: in no time, you can start developing your ideas.

    However, multiple packages are related to authentication, and it can be challenging to know which one is right for your project.

    Should you use Laravel Jetstream or Laravel Breeze? What are the differences between Laravel Passport and Laravel Sanctum? Is Laravel UI still an option?

    To help you find your way around, I have made a diagram to help you understand their differences and make the best choice for your project.

    Which Laravel authentication package to install for your project?

    Laravel Breeze

    Breeze is a minimal and simple implementation of Laravel’s authentication features. It works for both full-stack applications and APIs. This is an excellent choice if you like simplicity or are brand new to Laravel; a typical setup is sketched after the feature list below.

    • You can choose between Blade, Vue/React (with Inertia), or API
    • It uses Tailwind CSS when Blade or Inertia stacks are chosen
    • ✅ Registration
    • ✅ Login
    • ✅ Profile Management
    • ✅ Password Reset
    • ✅ Email Verification
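
    For reference, a typical Breeze setup on a fresh Laravel application is only a few commands (this sketch assumes the Blade stack; swap the argument for vue, react, or api):

    composer require laravel/breeze --dev
    php artisan breeze:install blade
    php artisan migrate
    npm install && npm run dev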

    Laravel Jetstream

    More complete and stylish, Jetstream is an interesting alternative to Breeze. It provides more features but requires that your project use Livewire or Inertia; an install sketch follows the feature list below.

    • You can choose between Blade + Livewire or Vue/React + Inertia
    • It uses Tailwind CSS
    • ✅ Registration
    • ✅ Login
    • ✅ Profile Management
    • ✅ Password Reset
    • ✅ Email Verification
    • ✅ Two-Factor Authentication (2FA)
    • ✅ Teams Management
    • ✅ Browser Sessions Management (let users see where they’re logged in)
    • ✅ API Tokens & Permissions (let users generate API tokens)
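
    As a sketch, installing Jetstream is just as quick; the stack (livewire or inertia) and the optional teams feature are chosen at install time:

    composer require laravel/jetstream
    php artisan jetstream:install livewire --teams
    php artisan migrate
    npm install && npm run build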

    Laravel Fortify

    Fortify is a front-end agnostic implementation of all authentication features. It provides all routes and controllers needed to implement your authentication logic but requires you to code the user interface yourself. There are no views out of the box!

    This is a great choice if you don’t use Tailwind CSS, want to code your front end yourself, or are building an API.

    • ❌ No views, no user interface
    • ✅ Registration
    • ✅ Login
    • ✅ Profile Management
    • ✅ Password Reset
    • ✅ Email Verification
    • ✅ Two-Factor Authentication (2FA)
    • ✅ Teams Management
    • ✅ Browser Sessions Management (let users see where they’re connected and log out sessions)
    • ✅ API Tokens & Permissions (let users generate API tokens)

    For reference, Jetstream uses Fortify under the hood and adds the UI layer on top.
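
    If you want to try Fortify on its own, a minimal setup looks like the sketch below; you then enable or disable individual features in config/fortify.php:

    composer require laravel/fortify
    php artisan vendor:publish --provider="Laravel\Fortify\FortifyServiceProvider"
    php artisan migrate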


    Laravel UI

    Laravel UI is the legacy scaffolding package and brings a basic authentication system built on the Bootstrap CSS framework. Today, the only reason to install it is that your project uses Bootstrap CSS; an install sketch follows the list below.

    • You can choose between simple HTML, Vue, or React
    • ✅ Registration
    • ✅ Login
    • ✅ Password Reset
    • ✅ Email Verification
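
    If that is your case, a typical Laravel UI setup looks like this; the argument picks the front-end flavor:

    composer require laravel/ui
    php artisan ui bootstrap --auth   # or: ui vue --auth / ui react --auth
    npm install && npm run dev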

    Laravel Passport

    Passport provides a full OAuth2 server implementation for your Laravel application. Choose it when third-party clients need to authenticate against your API through full OAuth2 flows; for first-party SPAs and mobile apps, Sanctum (below) is usually the simpler choice.
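
    Setting it up is brief as well; passport:install creates the encryption keys and the initial OAuth clients:

    composer require laravel/passport
    php artisan migrate
    php artisan passport:install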


    Laravel Sanctum

    Sanctum offers a simple way to authenticate SPAs or mobile applications that need to communicate with your Laravel-powered API. If you don’t need full OAuth2 support, this is a much simpler alternative to Passport.

    • ✅ Middleware to authenticate your SPA (using cookie-based sessions)
    • ✅ Middleware to authenticate clients using API tokens
    • ✅ API tokens generation & permissions
    • ❌ No authentication routes or controllers

    Using Sanctum, you still have to implement your own authentication logic (creating your routes and controllers) or use it in combination with Fortify.

    If you use it to authenticate a SPA, it needs to be hosted on the same root domain as your API since it uses cookie-based sessions.
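
    To make the token side concrete, here is a rough sketch, assuming your User model uses Sanctum’s HasApiTokens trait (the token name is illustrative):

    // Issue a token, e.g. in a login controller after validating credentials.
    $token = $request->user()->createToken('mobile-app')->plainTextToken;

    // The client then sends it back with every request:
    // Authorization: Bearer <token>

    // Protect routes with Sanctum's guard in routes/api.php
    // (the default api.php already imports Request and Route):
    Route::middleware('auth:sanctum')->get('/user', function (Request $request) {
        return $request->user();
    });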


    I hope Laravel’s authentication packages and starter kits no longer hold any secrets for you, and that this article has made your choice easier if you were hesitating between them.

    👋 I offer tech consulting services for companies that need help with their Laravel applications. I can assist with upgrades, refactoring, testing, new features, or building new apps. Feel free to contact me through LinkedIn, or you can find my email on my GitHub profile.

    source: https://medium.com/@antoine.lame/which-laravel-authentication-package-to-choose-290551a82a44

  • Querying Whois information with PHP

    The wonderful world of Whois! If you do not know what it is or what it is for, this is probably not the publication for you. But if you know what I’m talking about, then you’re going to be excited to learn that you can stop using third-party services to check this information: with a little effort and love for programming, you can create your own service! (I’m trying to add excitement to a query for information that, as necessary as it is, is quite common.)

    Come on, join me again to walk the yellow brick road… a path full of wonders!

    What is Whois?

    Whois is a public service that allows you to check who owns a domain, its status, its expiration and renewal dates, and other information of interest.

    You can use a Whois query to determine if a domain is available for purchase, for example. (Warning: your mileage may vary; whois is generally quite reliable but can produce results you don’t expect, so use it with caution.)

    Where can you find the Whois servers of a TLD?

    IANA (the entity in charge of regulating the allocation of IP addresses and root DNS for address resolution) has a list at the following link:

    Root Zone Database (www.iana.org): the delegation details of top-level domains, including gTLDs such as .com.

    This list will help us later to scrape and extract information necessary for our PHP Whois query script.

    Warning: Scraping itself is not bad, but misused it ruins everyone’s party. Always be very responsible and respectful regarding the information you grab with this technique.

    Structure of our solution

    Our directory and solution files will be as follows:

    /whois           # root directory (your web server must have access to it)
      /cache         # This directory will store JSON files that contain the whois
                     # server address for a given TLD; these files are used to cache
                     # the scraping results.
      - index.html   # This file will be our entry point and GUI to query
                     # whois information.
      - getwhois.php # This script will receive the whois request from index.html
                     # and return the result.
      - Whois.php    # Our class definition with all attributes and methods used
                     # to query whois information.

    Remember that you must have a web development environment with PHP, or at least be able to execute PHP scripts from a command console. This time we will use PHP 8.2 on an Apache web server.
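
    If you prefer to skip Apache while experimenting, PHP’s built-in development server is enough for local testing (the path below is whatever you chose as the root directory):

    cd /path/to/whois
    php -S localhost:8080

    Then browse to http://localhost:8080 instead of your Apache virtual host.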

    We will write our code based on three large blocks as follows:

    • Scraping
    • Whois request/query
    • Interface and result presentation

    1. Scraping

    Let’s do some scraping to extract the Whois server for each domain type (TLD) available at https://www.iana.org/domains/root/db. We will do it “on demand”: we will only scrape when we do not have a previous cache file for the TLD. This way we reduce traffic, respect the site the information comes from, and reduce response times while querying.

    For example, let’s visit the information for the TLD “.com” at https://www.iana.org/domains/root/db/com.html. General contact information is shown and, at the bottom, the Whois server for this type of domain, like this:

    iana.org whois

    The address next to the text “WHOIS Server” will be the data of interest for our scrape process.

    The first step will be to make a request to the website containing the required information and capture the HTML of the response. We can do this with our dear friend cURL like this:

        /**
         * This function downloads HTML content from a URL
         *
         * @param string $url URL to be queried
         *
         * @return string|bool HTML received in response from the website
         * if an error occurs it will return false
         */
        function curlDownload(string $url): string|bool
        {
    
            $curl = curl_init();
    
            //NOTE: SSL verification is disabled for convenience in this local
            //experiment; keep it enabled in anything production-facing.
            curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 0);
            curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, 0);
            curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 60);
            curl_setopt($curl, CURLOPT_HEADER, 0);
            curl_setopt($curl, CURLOPT_CUSTOMREQUEST, "GET");
            curl_setopt($curl, CURLOPT_USERAGENT, "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36");
            curl_setopt($curl, CURLOPT_URL, $url);
    
            $html = curl_exec($curl);
            curl_close($curl);
    
            return $html;
        }

    In the User-Agent header, we set a value that simulates a visit from a desktop browser.

    Now that we have the content of the site, we need to extract the whois server address from the received HTML. This can be done with a very useful tool, a tool that generates panic or immense hatred in the depths of the brave ones who dare to look it straight in the eyes: regular expressions.

    No matter how you feel about it, always try to remember “Macho Man” Randy Savage’s wise words:

    You may not like it, but accept it! — “Macho Man” Randy Savage wisdom

    We are going to review the content and create the regular expression. Keep in mind that we are not going to cover in detail how it works; what matters for our present objective is that it works.

    Let’s look at the source code of the page where the name of the whois server is located. The section we are interested in has this structure:

        <p>
    
            <b>URL for registration services:</b> <a href="http://www.verisigninc.com">http://www.verisigninc.com</a><br/>
    
    
            <b>WHOIS Server:</b> whois.verisign-grs.com
    
        </p>

    Now let’s create a regex to extract just the text “whois.verisign-grs.com”. It will be something like this:

    $this->regexWhoisServer = '#(?s)(?<=\<b\>WHOIS Server\:\<\/b\>)(.+?)(?=\<\/p\>)#';

    This expression captures the text between the pattern “<b>WHOIS Server:</b>” and the first occurrence of the closing tag “</p>”. Using PHP, we can store the returned value in a variable to use later in our whois query.

    Our sample code to understand this concept will look like this:

    //Array that will be used to store the regex results.
    $matchesWhois = array();
    
    //Now we use our HTML download function created previously.
    $html = curlDownload("https://www.iana.org/domains/root/db/com.html");
    //Use the regex to extract the whois server from the HTML.
    $resWhois = preg_match("#(?s)(?<=\<b\>WHOIS Server\:\<\/b\>)(.+?)(?=\<\/p\>)#", $html, $matchesWhois, PREG_OFFSET_CAPTURE);
    
    //preg_match returns 1 when the pattern matched, 0 or false otherwise.
    if ($resWhois === 1) {
        //Remove the blank spaces from the result text, stored at the [0][0]
        //element in the array.
        $matchesWhois[0][0] = trim($matchesWhois[0][0]);
    
        //Finally assign the whois server name to a variable.
        $whoisServer = $matchesWhois[0][0];
    }

    2. Whois request/query

    We already managed to obtain the name of the server we will request the whois information from; now we need to implement the code to perform the query. For this, we open a connection through sockets and send the domain name to receive a response with the whois data, like this:

    //Open a connection to the whois server on port 43 with a 20 second timeout.
    $whoisSock = @fsockopen("whois.verisign-grs.com", 43, $errno, $errstr, 20);
    
    //Stop here if the connection could not be established.
    if (!$whoisSock) {
        die("Connection failed: {$errstr} ({$errno})");
    }
    
    //This variable will be used to store the whois result.
    $whoisQueryResult = "";
    
    //Send the domain name terminated by a CRLF new line.
    fputs($whoisSock, "mytestdomain.com" . "\r\n");
    
    $content = "";
    
    //Read the server response.
    while (!feof($whoisSock)) {
        $content .= fgets($whoisSock);
    }
    
    //Close the socket.
    fclose($whoisSock);
    
    //Convert the string to an array (one element for each line on the string)
    $arrResponseLines = explode("\n", $content);
    
    foreach ($arrResponseLines as $line) {
    
        $line = trim($line);
    
        //ignore if the line is empty or if it begins with "#" or "%"
        if (($line != '') && (!str_starts_with($line, '#')) && (!str_starts_with($line, '%'))) {
            //Append the line to the result variable.
            $whoisQueryResult .= $line . PHP_EOL;
        }
    }
    
    //Show the result.
    echo $whoisQueryResult;

    Now that we have the result, we will generate the code to query and display the result on the web.

    3. Interface and result presentation

    To query and show results we will create the Whois class that integrates the concepts previously shown, a file to receive query requests and the web interface to display the results.

    Let’s start with our class, we will call it Whois.php and it has the following structure:

    <?php
    
    class Whois
    {
        //Regex matches
        private array $matchesWhois;
        //Cache files path
        private string $_CACHE_PATH;
        //Regex used to detect the whois server while scraping the TLD URL
        private string $regexWhoisServer;
        //Cache files extension (.json)
        private string $_FILE_EXT;
        //Flag, True = using cache file, False = scraped result
        private bool $usingCache;
        //Domain name being used to query the whois info.
        private string $domain;
        //Domain TLD
        private string $tld;
        //Cache file name
        private string $cacheFile;
        //URL used to scrape the whois server to be used.
        private string $urlWhoisDB;
        //Array that will contain the answer and errors generated during the whois query
        private array $response;
        //Array, contains the whois server address
        private array $whoisInfo;
        //Tag to be replaced by the domain TLD extracted from the domain name.
        private string $tldUrlTag;
        //Whois port, default 43
        private int $_WHOIS_PORT;
        //Whois query timeout in seconds, default 20
        private int $_WHOIS_TIMEOUT;
        //User Agent to be used to scrape the whois server info.
        private string $_CURL_USER_AGENT;
    
    
        /**
         * Class constructor
         */
        public function __construct()
        {
            $this->matchesWhois = array();
            $this->whoisInfo = array();
            $this->_CACHE_PATH = __DIR__ . "/cache/";
            $this->regexWhoisServer = '#(?s)(?<=\<b\>WHOIS Server\:\<\/b\>)(.+?)(?=\<\/p\>)#';
            $this->_FILE_EXT = ".json";
            $this->usingCache = false;
            $this->domain = "";
            $this->tld = "";
            $this->cacheFile = "";
            $this->tldUrlTag = "[TLD]";
            $this->urlWhoisDB = "https://www.iana.org/domains/root/db/{$this->tldUrlTag}.html";
            $this->response = array();
            $this->_WHOIS_PORT = 43;
            $this->_WHOIS_TIMEOUT = 20;
            $this->_CURL_USER_AGENT = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36";
        }
    
    
        /**
         * Domain validation
         *
         * @param string $domain domain name to be validated i.e. "google.com"
         *
         * @return bool
         */
        public function isDomain(string $domain): bool
        {
            //filter_var returns the domain string on success or false on failure,
            //so convert it to a strict boolean. FILTER_FLAG_HOSTNAME enforces
            //hostname syntax instead of accepting nearly any string.
            return filter_var($domain, FILTER_VALIDATE_DOMAIN, FILTER_FLAG_HOSTNAME) !== false;
        }
    
    
        /**
         * Extracts the TLD from the domain name
         *
         * @param mixed $domain domain name
         *
         * @return string
         */
        private function extractTld($domain): string
        {
            $arrDomain = explode(".", $domain);
    
            return end($arrDomain);
        }
    
        /**
         * Sets the cache filename for a given TLD, it also checks if the file exists and loads its content
         *
         * @param mixed $tld domain (TLD), i.e. "com", "net", "org".
         *
         * @return void
         */
        private function setCacheFileName($tld): void
        {
            $this->cacheFile = $this->_CACHE_PATH . $tld . $this->_FILE_EXT;
    
            if (file_exists($this->cacheFile)) {
    
                $tmpCache = file_get_contents($this->cacheFile);
                $this->whoisInfo = json_decode($tmpCache, true);
    
                $this->usingCache = true;
            }
        }
    
        /**
         * This function can be used to check if there were errors during the process
         *
         * @return bool true = there are errors, false = no errors
         */
        public function hasErrors(): bool
        {
            return isset($this->response["errors"]);
        }
    
        /**
         * Returns the response received, including errors (if any).
         * @param bool $json Allows to select the response format, false = array, true = json
         *
         * @return array|string
         */
        public function getResponse(bool $json = false): array|string
        {
            return ($json) ? json_encode($this->response) : $this->response;
        }
    
        /**
         * This function downloads and returns the HTML returned from a URL
         *
         * @param string $url URL address
         *
         * @return string|bool string containing the HTML received, if there is an error return false.
         */
        private function curlDownload(string $url): string|bool
        {
    
            $curl = curl_init();
    
            curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 0);
            curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, 0);
            curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 60);
            curl_setopt($curl, CURLOPT_HEADER, 0);
            curl_setopt($curl, CURLOPT_CUSTOMREQUEST, "GET");
            curl_setopt($curl, CURLOPT_USERAGENT, $this->_CURL_USER_AGENT);
            curl_setopt($curl, CURLOPT_URL, $url);
    
            $html = curl_exec($curl);
            curl_close($curl);
    
            return $html;
        }
    
        /**
         * Whois query entry point.
         *
         * @param string $domain domain name for which you want to check the whois
         *
         * @return void
         */
        public function getWhoisServerDetails(string $domain): void
        {
    
            $this->domain = $domain;
            $this->tld = $this->extractTld($domain);
            $this->setCacheFileName($this->tld);
    
            if (!$this->usingCache) {
    
                $urlWhoisDB = str_replace($this->tldUrlTag, $this->tld, $this->urlWhoisDB);
                $html = $this->curlDownload($urlWhoisDB);
    
                $resWhois = preg_match($this->regexWhoisServer, $html, $this->matchesWhois, PREG_OFFSET_CAPTURE);
    
                if ($resWhois != 1) {
    
                    $this->response["errors"][] = array(
                        "error" => "TLD '{$this->tld}' not found!",
                        "domain" => $domain
                    );
    
                    return;
                }
    
                $this->matchesWhois[0][0] = trim($this->matchesWhois[0][0]);
                $this->whoisInfo["whois"] = $this->matchesWhois[0][0];
    
                file_put_contents($this->_CACHE_PATH . $this->tld . $this->_FILE_EXT, json_encode($this->whoisInfo, JSON_UNESCAPED_UNICODE));
            }
    
            if (!isset($this->whoisInfo["whois"])) {
    
                $this->response["errors"][] = array(
                    "error" => "WhoIs Server for TLD {$this->tld} not found!.",
                    "domain" => $domain
                );
    
                return;
            }
    
            $whoisSock = @fsockopen($this->whoisInfo["whois"], $this->_WHOIS_PORT, $errno, $errstr, $this->_WHOIS_TIMEOUT);
            $whoisQueryResult = "";
    
            if (!$whoisSock) {
    
                $this->response["errors"][] = array(
                    "error" => "{$errstr} ({$errno})",
                    "domain" => $domain
                );
    
                return;
            }
    
            fputs($whoisSock, $this->domain . "\r\n");
    
            $content = "";
    
            while (!feof($whoisSock)) {
                $content .= fgets($whoisSock);
            }
    
            fclose($whoisSock);
    
            if ((strpos(strtolower($content), "error") === false) && (strpos(strtolower($content), "not allocated") === false)) {
    
                $arrResponseLines = explode("\n", $content);
    
                foreach ($arrResponseLines as $line) {
    
                    $line = trim($line);
    
                    if (($line != '') && (!str_starts_with($line, '#')) && (!str_starts_with($line, '%'))) {
                        $whoisQueryResult .= $line . PHP_EOL;
                    }
                }
            }
    
            $this->response["whoisinfo"] = $whoisQueryResult;
        }
    }

    This class contains all the main functions for scraping, whois server name extraction, domain name processing, and the final whois query. Note that error checking and handling routines were added, as well as functions to automate scraping based on the domain type (TLD) queried and a cache strategy to avoid making more queries than necessary to iana.org.

    Now let’s create our file that will receive query requests, call it getwhois.php and it will have the following content:

    <?php
    
    //Include the class definition.
    require("Whois.php");
    
    //Decode the parameters received.
    $paramsFetch = json_decode(
        file_get_contents("php://input"),
        true
    );
    
    //Create our whois object
    $whoisObj = new Whois();
    //Query the whois information
    $whoisObj->getWhoisServerDetails($paramsFetch["domain"]);
    
    //Return the response as JSON
    echo $whoisObj->getResponse(true);
    exit;

    It is quite simple: it includes our Whois.php class, captures the parameters received from an HTML form (which uses the JavaScript fetch function to send the request with the domain name), creates an instance of our class, performs the whois query, then returns the result in JSON format and finishes execution.

    Now let’s go to the index.html file. This will be our graphical interface and the access point for querying and visualizing results. I use Bulma CSS for HTML controls and styling; it’s pretty straightforward, unintrusive, and lets you get results quickly. The file will look like this:

    <!DOCTYPE html>
    <html>
    
    <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, initial-scale=1">
        <title>Whois</title>
        <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bulma@0.9.4/css/bulma.min.css">
        <script type="module">
            window.addEventListener('load', (event) => {
                //Event Handler for the search button.
                document.querySelector(".search").addEventListener('click', (event) => {
                    //Show that the query is executing
                    event.currentTarget.classList.add('is-loading');
                    //disable the button
                    event.currentTarget.disabled = true;
    
                    //Hide the result sections.
                    document.querySelector(".result").parentElement.classList.add("is-hidden");
                    document.querySelector(".error").parentElement.classList.add("is-hidden");
    
                    //Prepare the payload
                    const payload = JSON.stringify({
                        "domain": document.querySelector(".domain").value
                    });
    
                    //Send the request to getwhois.php
                    fetch('getwhois.php', {
                        method: 'POST',
                        headers: {
                            'Content-Type': 'application/json',
                        },
                        body: payload,
                    })
                        .then(response => response.json())
                        .then(data => {
                            //Process the response.
                            if (data.errors != undefined) {
                                document.querySelector(".error").parentElement.classList.remove("is-hidden");
    
                                //Reset the error box, then append each error on its own line.
                                document.querySelector(".error").innerText = "";
                                for (const item in data.errors) {
                                    document.querySelector(".error").innerText += data.errors[item].error + "\n";
                                }
    
                            } else {
    
                                document.querySelector(".result").parentElement.classList.remove("is-hidden");
                                document.querySelector(".result").innerText = data.whoisinfo;
                            }
    
                        })
                        .catch((error) => {
                            document.querySelector(".error").parentElement.classList.remove("is-hidden");
                            document.querySelector(".error").innerText = error;
                            console.error('Error:', error);
                        }).finally(() => {
                            document.querySelector(".search").classList.remove('is-loading');
                            document.querySelector(".search").disabled = false;
                        });
                });
    
            });
        </script>
    </head>
    
    <body>
        <section class="section">
            <div class="container">
                <div class="columns">
                    <div class="column">
                        <div class="columns">
                            <div class="column"></div>
                            <div class="column has-text-centered">
                                <h1 class="title">
                                    WhoIs Lookup
                                </h1>
                            </div>
                            <div class="column"></div>
                        </div>
                        <div class="columns">
                            <div class="column"></div>
                            <div class="column has-text-centered">
                                <div class="field is-grouped is-grouped-centered">
                                    <div class="control">
                                        <input class="input domain" type="text" placeholder="Domain">
                                    </div>
                                    <div class="control">
                                        <button class="button is-info search">
                                            Search
                                        </button>
                                    </div>
                                </div>
                            </div>
                            <div class="column"></div>
                        </div>
                    </div>
                </div>
                <div class="columns box is-hidden">
                    <div class="column result"></div>
                </div>
                <div class="columns box is-hidden">
                    <div class="column notification is-danger error has-text-centered">
                    </div>
                </div>
            </div>
        </section>
    </body>
    
    </html>

    Testing

    To perform tests, just point your browser to the path where our scripts are located, in my case http://localhost/whois. It will show the field to type the domain name and the “Search” button to request the Whois information. You can try a popular domain like “google.com”; the result will look like this:

    After a successful whois query, you will notice that a file named after the TLD, for example “com.json”, is created in the /cache directory. It contains the name of the corresponding whois server, which allows us to avoid scraping again.
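
    Given how the class serializes $whoisInfo, the cache file is a tiny JSON object; for “.com” it should look like this:

    {"whois":"whois.verisign-grs.com"}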

    And that’s all. Never forget that only the brave are truly free; drink plenty of water, exercise, and get enough sleep.

    At Winkhosting.co we are much more than hosting. If you need shared hosting, domains or servers, stop by to visit us.

    source: https://medium.com/winkhosting/querying-whois-information-with-php-f686baee8c7

  • A Complete Guidebook on Starting Your Own Homelab for Data Analysis

    There has never been a better time to start your data science homelab for analyzing data useful to you, storing important information, or developing your own tech skills.

    By Will Keefe, published in Towards Data Science

    There’s an expression I’ve read on Reddit a few times now in varying tech-focused subreddits that is along the lines of “Paying for cloud services is just renting someone else’s computer.” While I do think cloud computing and storage can be extremely useful, this article will focus on some of the reasons why I’ve moved my analyses, data stores, and tools away from the online providers, and into my home office. A link to the tools and hardware I used to do this is available as well.

    Introduction

    The best way to start explaining the method to my madness is by sharing a business problem I ran into. While I’m a fairly traditional investor with a low risk tolerance, there is a small hope inside of me that maybe, just maybe, I can be one of the <1% to beat the S&P 500. Note I used the word “hope”, and as such, I do not put too much on the line for this hope. A few times a year I’ll give my Robinhood account $100 and treat it with as much regard as I treat a lottery ticket, hoping to break it big. I will put the adults in the room at ease, though, by sharing that this account is separate from my larger accounts, which are mostly based on index funds with regular modest returns plus a few value stocks on which I sell covered calls on a rolling basis. My Robinhood account, however, is borderline degenerate gambling, and anything goes. I have a few rules for myself though:

    1. I never take out any margin.
    2. I never sell uncovered, only buy to open.
    3. I don’t throw money at chasing losing trades.

    You may wonder where I’m going with this, and I’ll pull back from my tangent by sharing that my “lottery tickets” have, alas, not earned me a Jeff-Bezos-worthy yacht yet, but they have taught me a good bit about risk and loss. These lessons have also inspired the data enthusiast inside of me to improve the way I quantify risk and attempt to anticipate market trends and events. Even models that are only directionally correct in the short term can provide tremendous value to investors, retail and hedge alike.

    The first step I saw toward improving my decision-making was to have data available to make data-driven decisions; removing emotion from investing is a well-known success tip. While historical data for stocks and ETFs is widely available and open-sourced through resources such as yfinance, derivative historical datasets are much more expensive and difficult to come by. Some initial glances at the available APIs hinted that regular, routine access to the data needed to backtest strategies for my portfolio could cost me hundreds of dollars annually, and possibly even monthly depending on the granularity I was seeking.

    I decided I’d rather invest in myself in this process, and spend $100’s of dollars on my own terms instead. *audience groans*

    Building on the Cloud

    My first thoughts on data scraping and warehousing led me to the same tools I use daily in my work. I created a personal AWS account and wrote Python scripts to deploy on Lambda to scrape free, live option datasets at predetermined intervals and write the data on my behalf. This was a fully automated system, and near-infinitely scalable, because a separate scraper would be dynamically spun up for every ticker in my portfolio. Writing the data was more challenging, and I was torn between two routes: I could either write the data to S3, crawl it with Glue, and analyze it with serverless querying in Athena, or I could use the relational database service and write my data directly from Lambda to RDS.

    A quick breakdown of AWS tools mentioned:

    Lambda is serverless computing allowing users to execute scripts without much overhead and with a very generous free tier.

    S3, aka simple storage service, is an object storage system with a sizable free tier and extremely cost-effective storage at $0.02 per GB per month.

    Glue is an AWS data prep, integration, and ETL tool with web crawlers available for reading and interpreting tabular data.

    Athena is a serverless query architecture.

    I ended up leaning toward RDS just to have the data easily queryable and monitorable, if for no other reason. It also had a free tier of 750 server hours as well as 20 GB of storage, giving me a nice sandbox to get my hands dirty in.

    Little did I realize, however, how large stock options data is. I began to write about 100 MB of data per ticker per month at 15-minute intervals, which may not sound like much, but across my portfolio of 20 tickers that is roughly 2 GB a month, enough to consume the entire 20 GB of free-tier storage before the end of the year. On top of that, the small compute capacity within the free tier was quickly eaten up, and my server burned through all 750 hours before I knew it (considering I wanted to track options trades for roughly 8 hours a day, 5 days a week). I also frequently read and analyzed data after work at my day job, which drove usage up as well. After about two months I exhausted the free tier allotment and received my first AWS bill: about $60 a month. Keep in mind, once the free tier ends, you’re paying for every server hour of processing, a per-GB charge for data leaving the AWS ecosystem for my local dev machine, and a storage cost in GB/month. I anticipated that within a month or two my cost of ownership could increase by at least 50%, if not more, and continue growing from there.

    Yikes.

    Leaving the Cloud

    At this point, I realized I’d rather take that $60 a month I was spending renting equipment from Amazon, put it toward my electric bill, and throw whatever was left over into my Robinhood account, back where we started. As much as I love using AWS tools, when my employer isn’t footing the bill (and to my coworkers reading this, I promise I’m frugal at work too), I really don’t have much interest in investing in them. AWS just isn’t priced for hobbyists. It offers plenty of great free resources for newcomers to learn with, and great bang for your buck professionally, but not at this in-between level.

    I had an old Lenovo Y50–70 laptop from prior to college with a broken screen that I thought I’d repurpose as a home web scraping bot and SQL server. While they still can fetch a decent price new or certified refurbished (likely due to the i7 processor and dedicated graphics card), my broken screen pretty much totaled the value of the computer, and so hooking it up as a server breathed fresh life into it, and about three years of dust out of it. I set it up in the corner of my living room on top of a speaker (next to a gnome) and across from my PlayStation and set it to “always on” to fulfill its new purpose. My girlfriend even said the obnoxious red backlight of the computer keys even “pulled the room together” for what it’s worth.

    Gnome pictured, but at the time photo was taken, the server was not yet configured.

    Conveniently my 65″ Call-of-Duty-playable-certified TV was within HDMI cable distance to the laptop to actually see the code I was writing too.

    I migrated my server from the cloud to my janky laptop and was off to the races! I could now perform all of the analysis I wanted at just the cost of electricity: around $0.14/kWh, or about $0.20–0.30 a day. For another month or two, I tinkered and tooled around locally. Typically this looked like a few hours a week after work of opening up my MacBook, playing around with ML models fed by data from my gnome-speaker-server, visualizing data on local Plotly dashboards, and then directing my Robinhood investments.

    I experienced some limited success. I’ll save the details for another Medium post once I have more data and performance metrics to share, but I decided I wanted to expand from a broken laptop to my own micro cloud. This time, not rented, but owned.

    Building the Home Lab

    “Home Lab” is a name that sounds really complicated and cool *pushes up glasses*, but is actually relatively straightforward when deconstructed. Basically, there were a few challenges I was looking to address with my broken laptop setup that provided motivation, as well as new goals and nice-to-haves that provided inspiration.

    Broken laptop problems:

    The hard drive was old, at least 5 or 6 years old, which posed a risk of future data loss. It also slowed down significantly under duress with larger queries, a known problem with the model.

    Having to use my TV and Bluetooth keyboard to use my laptop with Windows 10 Home installed was very inconvenient, and not ergonomically friendly.

    The laptop was not upgradeable in the event I wanted to add more RAM beyond what I had already installed.

    The technology was limited in parallelizing tasks.

    The laptop alone was not strong enough to host my SQL server as well as dashboards and crunching numbers for my ML models. Nor would I feel comfortable sharing resources on the same computer, shooting the other services in the foot.

    A system I would put into place had to solve each of these problems, but there were also new features I’d like to achieve too.

    Planned New Features:

    A new home office setup to make working from home from time to time more comfortable.

    Ethernet wiring throughout my entire apartment (if I’m paying for the whole gigabit, I’m going to use the whole gigabit AT&T).

    Distributed computing* with microservers where appropriate.

    Servers would be capable of being upgraded and swapped out.

    Varying programs and software deployable to achieve different subgoals independently and without impeding current or parallel programs.

    *Distributed computing with the computers I chose is a debated topic that will be explained later in the article.

    I spent a good amount of time conducting research on appropriate hardware configurations. One of my favorite resources I read was “Project TinyMiniMicro”, which compared the Lenovo ThinkCentre Tiny platform, the HP ProDesk/EliteDesk Mini Platform, and the Dell OptiPlex Micro platform. I too have used single-board computers before like the authors of Project TMM, and have two Raspberry Pis and an Odroid XU4.

    What I liked about my Pis:

    They were small, ate little power, and the new models have 8GB of RAM.

    What I liked about my Odroid XU4:

    It is small, has 8 cores, and is a great emulation platform.

    While I’m sure my SBCs will still find a home in my homelab, remember, I need equipment that handles the services I want to host. I also ended up purchasing probably the most expensive Amazon order of my entire life and completely redid my entire office. My shopping cart included:

    • Multiple Cat6 Ethernet Cables
    • RJ45 Crimp Tool
    • Zip ties
    • 2 EliteDesk 800 G1 i5 Minis (but was sent G2 #Win)
    • 1 EliteDesk 800 G4 i7 Mini (and sent an even better i7 processor #Win)
    • 2 ProDesk 600 G3 i5 Minis (and was sent a slightly worse i5 #Karma)
    • Extra RAM
    • Multiple SSDs
    • A new office desk to replace my credenza/runner
    • New office lighting
    • Hard drive cloning equipment
    • Two 8-Port Network Switches
    • An Uninterruptible Power Supply
    • A Printer
    • A Mechanical Keyboard (Related, I also have five keyboard and mice combos from the computers if anyone wants one)
    • Two new monitors

    If you’d like to see my entire parts list, with links to each item so you can check it out or make a purchase yourself, feel free to head over to my website.

    Once my Christmas-in-the-Summer arrived with a whole slew of boxes on my doorstep, the real fun could begin. The first step was finishing wiring my ethernet throughout my home. The installers had not connected any ethernet cables to the cable box by default, so I had to cut the ends and install the jacks myself. Fortunately, the AWESOME toolkit I purchased (link on my site) included the crimp tool, the RJ45 ends, and testing equipment to ensure I wired the ends right and to identify which port around my apartment correlated to which wire. Of course, with my luck, the very last of 8 wires ended up being the one I needed for my office, but the future tenants of my place will benefit from my good deed for the day I guess. The entire process took around 2–3 hours of wiring the gigabit connections but fortunately, my girlfriend enjoyed helping and a glass of wine made it go by faster.

    Following wired networking, I began to set up my office by building the furniture, installing the lighting, and unpacking the hardware. My desk setup turned out pretty clean, and I’m happy with how my office now looks.

    Before and After

    As for my hardware setup, each of the computers I purchased came with 16GB of RAM, which I upgraded to 32GB, as well as Solid State Drives (a few of which I upgraded). Since every device is running Windows 10 Pro, I can remote-login within my network, and I have already set up some of my services. Networking the devices was quite fun as well, although I think my cable management leaves a little room for improvement.

    Front of Home Lab Nodes
    Back of Home Lab Nodes

    Now per the asterisk I had in the beginning, why did I spend around a year’s worth of AWS costs on five computers with like 22 cores total rather than just buy/build a tricked-out modern PC? Well, there are a few reasons, and I’m sure this may be divisive with some of the other tech geeks in the room.

    1. Scalability — I can easily add another node to my cluster here or remove one for maintenance/upgrades.
    2. Cost — It is easy and cheap to upgrade and provide maintenance. Additionally, at around 35W max for most units, the cost of running my servers is very affordable.
    3. Redundancy — If one node goes down (i.e., a CPU dies), I have correcting scripts to balance my distributed workloads.
    4. Education — I am learning a significant amount that furthers my professional skills and experience, and education is ✨invaluable✨.
    5. It looks cool. Point number 5 here should be enough justification alone.

    Speaking of education though, here are some of the things I learned and implemented in my cluster:

    • When cloning drives from smaller to larger, you will need to extend the new drive’s volumes which frequently requires 3rd party software to do easily (such as Paragon).
    • You need to manually assign static IPs to get reliable results when remoting between desktops.
    • When migrating SQL servers, restoring from a backup is easier than querying between two different servers.

    I’m sure there will be many more lessons I will learn along the way…

    Below is an approximate diagram of my home network now. Not pictured are my wifi devices such as my MacBook and phone, but they jump between the two routers pictured. Eventually, I will also be adding my single-board computers and possibly one more PC to the cluster. Oh yeah, and my old broken-screen-laptop? Nobody wanted to buy it on Facebook Marketplace for even $50 so I installed Windows 10 Pro on it for remote access and added it to the cluster too for good measure, and that actually could be a good thing because I can use its GPU to assist in building Tensorflow models (and play a few turn-based games as well).

    Home Lab Network Diagram

    Speaking of Tensorflow, here are some of the services and functions I will be implementing in my new home lab:

    • The SQL server (currently hosting my financial datasets, as well as new datasets I am web scraping and will write about later, including my alma mater’s finances and my city’s public safety datasets)
    • Docker (for hosting apps/containers I will be building as well as a Minecraft server, because, why not)
    • Jenkins CI/CD system to build, train, and deploy Machine Learning models on my datasets
    • Git Repo for my personal codebase
    • Network Attached Storage supporting my many photos from my photography hobby, documents, and any other data-hoarding activities
    • And other TBD projects/services

    Closing Thoughts:

    Was it worth it? Well, there is an element of “only time will tell”. Once my credit card cools off from my Amazon fulfillment purchases, I’m sure it will enjoy the reprieve from AWS pricing as well. I am also looking forward to being able to build and deploy more of my hobbies, as well as collect more data to write more Medium articles about. Some of my next few planned articles include an analysis of the debt West Virginia University is currently facing, as well as an exploratory data analysis of Nashville’s public safety reporting (and possibly an ML model for anticipating emergency events and allocating resource needs). These data science projects are large enough that they would not be possible without some sort of architecture for storing and querying the massive amount of related data.

    What do you think? Does leaving the cloud and building a home lab sound like a project you would want to do? What would your hardware choice be?

    If you’re curious about the hardware I used, check out my reviews at www.willkeefe.com

    Some of my related recent Medium content:

    Production Planning and Resource Management of Manufacturing Systems in Python

    Efficient supply chains, production planning, and resource allocation management are more important than ever. Python…

    towardsdatascience.com

    Crime Location Analysis and Prediction Using Python and Machine Learning

    Using Python, Folium, and ScyPy, models can be built to illustrate crime incidents, calculate the best locations for…

    towardsdatascience.com


  • How to test Laravel with Sanctum API using the Postman

    In the last part, we completed the Laravel Breeze API installation and validated the API using the Breeze Next front end.

    In this blog, we are going to test the API using the Postman application.

    About Postman

    Postman is a tool for testing APIs: it sends requests and inspects responses, supporting multiple data formats and authentication schemes. In Postman’s own words:

    Postman is an API platform for building and using APIs. Postman simplifies each step of the API lifecycle and streamlines collaboration so you can create better APIs — faster.

    Install Postman

    Click here and complete your Postman installation. After installation, open the Postman application.

    Create a new Postman Collection

    Click the “Create New” button.

    In the popup window, click “Collection”.

    Enter the name “Laravel Admin API” and set the auth type to “No Auth”.


    Pre-request Script

    With Laravel Sanctum, we use SPA authentication, which works through Laravel’s built-in cookie-based session authentication services. So, we need to set the CSRF cookie for every request we make from Postman.

    We can set the cookie by using Postman’s Pre-request Script feature. Add the code below to the Pre-request Script tab:

    pm.sendRequest({
        url: pm.collectionVariables.get('base_url')+'/sanctum/csrf-cookie',
        method: 'GET'
    }, function (error, response, {cookies}) {
        if (!error){
            pm.collectionVariables.set('xsrf-cookie', cookies.get('XSRF-TOKEN'))
        }
    })

    This script uses some collection variables, which we will create in the next step.


    Postman Variables

    Add the host, base_url, and xsrf-cookie variables in the Postman Variables section.
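
    Based on the collection exported at the end of this post, the values are as follows (adjust base_url if your Laravel app is served elsewhere):

    host         localhost:3000
    base_url     http://localhost
    xsrf-cookie  (leave empty; the Pre-request Script fills it in)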


    Postman Add Request

    Click the “Add a request” link and create a new request for registration.

    In the header section, add the “Accept” and “X-XSRF-TOKEN” headers as shown below.

    You can also add them as plain text by clicking “Bulk Edit”:

    Accept:application/json
    X-XSRF-TOKEN:{{xsrf-cookie}}

    In the request body, add the values below as form-data:

    name:admin
    email:user1@admin.com
    password:password
    password_confirmation:password

    Register API request

    Click the “Send” button.

    You will get an empty response if the user was registered successfully.


    Get User API request

    Now we are going to create a request to get the current user details. Create a new request with the GET method.

    The API URL is /api/user; also add the headers below.

    Accept:application/json
    Referer:{{host}}

    For this request the body is none; click “Send”. You will get the current user details in the response.


    Logout Request

    Create the logout request with the /logout URL and the POST method. Also, add the headers below.

    Accept:application/json
    X-XSRF-TOKEN:{{xsrf-cookie}}

    You will get an empty response after sending the request.


    Login Request

    We have completed user registration, getting the user, and logout; only login is pending.

    Header

    Accept:application/json
    X-XSRF-TOKEN:{{xsrf-cookie}}

    Body: select form-data and insert the values below:

    email:user1@admin.com
    password:password

    We have created four requests in Postman and validated our Admin API. You can import the exported collection below and use it in Postman. In the next part, we will add permissions and roles to our Admin API.

    {
     "info": {
      "_postman_id": "6822504e-2244-46f9-bba8-115dc36644f6",
      "name": "Laravel Admin API",
      "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
      "_exporter_id": "25059912"
     },
     "item": [
      {
       "name": "Register",
       "request": {
        "method": "POST",
        "header": [
         {
          "key": "Accept",
          "value": "application/json",
          "type": "text"
         },
         {
          "key": "X-XSRF-TOKEN",
          "value": "{{xsrf-cookie}}",
          "type": "text"
         }
        ],
        "body": {
         "mode": "formdata",
         "formdata": [
          {
           "key": "name",
           "value": "admin",
           "type": "text"
          },
          {
           "key": "email",
           "value": "user1@admin.com",
           "type": "text"
          },
          {
           "key": "password",
           "value": "password",
           "type": "text"
          },
          {
           "key": "password_confirmation",
           "value": "password",
           "type": "text"
          }
         ]
        },
        "url": {
         "raw": "{{base_url}}/register",
         "host": [
          "{{base_url}}"
         ],
         "path": [
          "register"
         ]
        }
       },
       "response": []
      },
      {
       "name": "User",
       "request": {
        "method": "GET",
        "header": [
         {
          "key": "Accept",
          "value": "application/json",
          "type": "text"
         },
         {
          "key": "Referer",
          "value": "{{host}}",
          "type": "text"
         }
        ],
        "url": {
         "raw": "{{base_url}}/api/user",
         "host": [
          "{{base_url}}"
         ],
         "path": [
          "api",
          "user"
         ]
        }
       },
       "response": []
      },
      {
       "name": "Logout",
       "request": {
        "method": "POST",
        "header": [
         {
          "key": "Accept",
          "value": "application/json",
          "type": "text"
         },
         {
          "key": "X-XSRF-TOKEN",
          "value": "{{xsrf-cookie}}",
          "type": "text"
         }
        ],
        "url": {
         "raw": "{{base_url}}/logout",
         "host": [
          "{{base_url}}"
         ],
         "path": [
          "logout"
         ]
        }
       },
       "response": []
      },
      {
       "name": "Login",
       "request": {
        "method": "POST",
        "header": [
         {
          "key": "Accept",
          "value": "application/json",
          "type": "text"
         },
         {
          "key": "X-XSRF-TOKEN",
          "value": "{{xsrf-cookie}}",
          "type": "text"
         }
        ],
        "body": {
         "mode": "formdata",
         "formdata": [
          {
           "key": "email",
           "value": "user1@admin.com",
           "type": "text"
          },
          {
           "key": "password",
           "value": "password",
           "type": "text"
          }
         ]
        },
        "url": {
         "raw": "{{base_url}}/login",
         "host": [
          "{{base_url}}"
         ],
         "path": [
          "login"
         ]
        }
       },
       "response": []
      }
     ],
     "event": [
      {
       "listen": "prerequest",
       "script": {
        "type": "text/javascript",
        "exec": [
         "pm.sendRequest({",
         "    url: pm.collectionVariables.get('base_url')+'/sanctum/csrf-cookie',",
         "    method: 'GET'",
         "}, function (error, response, {cookies}) {",
         "    if (!error){",
         "        pm.collectionVariables.set('xsrf-cookie', cookies.get('XSRF-TOKEN'))",
         "    }",
         "})"
        ]
       }
      },
      {
       "listen": "test",
       "script": {
        "type": "text/javascript",
        "exec": [
         ""
        ]
       }
      }
     ],
     "variable": [
      {
       "key": "host",
       "value": "localhost:3000",
       "type": "string"
      },
      {
       "key": "base_url",
       "value": "http://localhost",
       "type": "string"
      },
      {
       "key": "xsrf-cookie",
       "value": "",
       "type": "string"
      }
     ]
    }
    

    Also, all of the requests are available in the Postman public workspace below:

    https://www.postman.com/balajidharma/workspace/laravel-admin-api/collection/25059912-6822504e-2244-46f9-bba8-115dc36644f6?action=share&creator=25059912
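
    If you prefer raw code to Postman, the same CSRF-cookie flow that the collection's pre-request script automates can be sketched in PHP with Guzzle (a rough sketch; the base URL, Referer, and credentials mirror the collection variables above):

    <?php

    use GuzzleHttp\Client;
    use GuzzleHttp\Cookie\CookieJar;

    require 'vendor/autoload.php';

    $jar = new CookieJar();
    $client = new Client(['base_uri' => 'http://localhost', 'cookies' => $jar]);

    // 1. Fetch the CSRF cookie; Sanctum sets XSRF-TOKEN in the jar.
    $client->get('/sanctum/csrf-cookie');
    $xsrf = urldecode($jar->getCookieByName('XSRF-TOKEN')->getValue());

    // 2. Log in, echoing the token back in the X-XSRF-TOKEN header.
    $client->post('/login', [
        'headers' => ['Accept' => 'application/json', 'X-XSRF-TOKEN' => $xsrf],
        'form_params' => ['email' => 'user1@admin.com', 'password' => 'password'],
    ]);

    // 3. The session cookie is now in the jar, so /api/user is authenticated.
    echo $client->get('/api/user', [
        'headers' => ['Accept' => 'application/json', 'Referer' => 'localhost:3000'],
    ])->getBody();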

  • Add Role and Permissions based authentication to Laravel API

    To manage roles & permissions, we are going to add the Spatie Laravel-permission package to our Laravel Admin API.

    The following steps are involved in installing the Laravel permission package for our Laravel Admin API.

    • Install Spatie Laravel-permission package
    • Publish the configuration and migration file
    • Running Migration

    Install Spatie Laravel-permission package

    Install the package using the composer command

    ./vendor/bin/sail composer require spatie/laravel-permission

    Publish the configuration and migration file

    The vendor:publish Artisan command publishes the package configuration to the config folder and copies the migration files to the migrations folder.

    ./vendor/bin/sail artisan vendor:publish --provider="Spatie\Permission\PermissionServiceProvider"

    Running Migration

    Run the migrations using artisan migrate

    ./vendor/bin/sail artisan migrate

    Now we need to add some roles & permissions and assign a role to users, so we need to create seeders.
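
    For illustration, this is roughly what such a seeder does with the package API (the role and permission names here are only examples):

    use Spatie\Permission\Models\Permission;
    use Spatie\Permission\Models\Role;

    // Create a role and a permission, link them, then assign the role to a user.
    $role = Role::create(['name' => 'super-admin']);
    $permission = Permission::create(['name' => 'manage users']);
    $role->givePermissionTo($permission);
    $user->assignRole('super-admin');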

    I created an Admin core package with seeders and common functionality when I was working on the Basic Laravel Admin Panel & Laravel Vue admin panel.

    Add the admin core package to our Admin API

    ./vendor/bin/sail composer require balajidharma/laravel-admin-core

    This admin core package will also install the Laravel Menu package, so run the publish commands below:

    ./vendor/bin/sail artisan vendor:publish --provider="BalajiDharma\LaravelAdminCore\AdminCoreServiceProvider"
    ./vendor/bin/sail artisan vendor:publish --provider="BalajiDharma\LaravelMenu\MenuServiceProvider"

    Now run the migration with the seeder

    ./vendor/bin/sail artisan migrate --seed --seeder=AdminCoreSeeder

    The seeder throws an error because the HasRoles trait is missing.

    We need to add the HasRoles trait to the User model. Open app/Models/User.php:

    <?php

    namespace App\Models;

    // ...
    use Spatie\Permission\Traits\HasRoles;

    class User extends Authenticatable
    {
        use HasApiTokens, HasFactory, Notifiable, HasRoles;

        // ...
    }
    Run the seeder again with migrate:fresh, which will drop all tables and re-run all of our migrations.

    ./vendor/bin/sail artisan migrate:fresh --seed --seeder=AdminCoreSeeder

    Open the Postman application and test the new user login. In the login request, change the form data to the email and password below:

    Email — superadmin@example.com

    Password — password

    After logging in, run the get user request. You will get the super admin details in the response.
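
    You can also verify the seeded role from code, for example in a tinker session (the exact role name depends on what the seeder created, so treat 'super-admin' as an assumption):

    $user = \App\Models\User::where('email', 'superadmin@example.com')->first();
    $user->getRoleNames();          // collection of the user's role names
    $user->hasRole('super-admin');  // true if the seeder assigned this role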


    We will create an API for Permission CRUD operations in the next blog.

  • How to Upgrade From Laravel 9.x to Laravel 10.x

    Laravel 10 was released on February 14. Laravel 10 requires a minimum PHP version of 8.1. Read more about the release in the Laravel release notes.

    Our Basic Laravel Admin Panel currently runs Laravel 9.x, so it is time to upgrade to Laravel 10.

    Laravel Upgrade From 9.x to 10.x

    The Laravel upgrade involves the following steps.

    • Update PHP version
    • Composer version update
    • Update Composer dependencies
    • Update Composer minimum stability
    • Update Docker Compose

    All the upgrade steps are available in the official Laravel documentation.


    Update PHP version

    Laravel 10 requires PHP 8.1.0 or greater, so update your PHP version if you are using a version below 8.1.

    Now we will check the Admin Panel PHP version. The PHP version is displayed on the Admin Panel (or Laravel default) home page.

    You can also check the PHP and Laravel versions on the command line using the commands below.

    PHP version

    ./vendor/bin/sail php -v

    # or

    ./vendor/bin/sail php --version

    # If you are not using Sail
    php -v

    Laravel Version

    ./vendor/bin/sail artisan --version

    # or the short form (note the capital V; lowercase -v is the verbosity flag)
    ./vendor/bin/sail artisan -V

    # If you are not using Sail
    php artisan --version

    Also, you can check the Laravel version in the ./vendor/laravel/framework/src/Illuminate/Foundation/Application.php file.
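
    Or check it programmatically; this reads the same version constant from that Application class:

    // e.g. inside ./vendor/bin/sail artisan tinker
    echo app()->version();   // "9.x" before the upgrade, "10.x" after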

    Our Laravel Admin Panel uses Laravel Sail (a Docker development environment), so we also need to update PHP in the docker-compose.yml file. We will do that in the last step.


    Composer version update

    Laravel 10 requires Composer 2.2.0 or greater. If you are using a lower version, uninstall it and install a newer version.

    You can check your Composer version using the commands below:

    composer --version

    composer -vvv about

    If you are using Sail, try:

    ./vendor/bin/sail composer --version

    ./vendor/bin/sail composer -vvv about

    We already have a Composer version above 2.2.0.


    Update Composer Dependencies

    For Laravel 10, we need to update the following dependencies in our application’s composer.json file

    • laravel/framework to ^10.0
    • spatie/laravel-ignition to ^2.0
    • php to ^8.1

    The Admin Panel updated the following dependencies:

    diff --git a/composer.json b/composer.json
    index 381f15d..b0be0bc 100644
    --- a/composer.json
    +++ b/composer.json
    @@ -5,12 +5,12 @@
         "keywords": ["framework", "laravel", "boilerplate", "admin panel"],
         "license": "MIT",
         "require": {
    -        "php": "^8.0.2",
    +        "php": "^8.1",
             "balajidharma/laravel-admin-core": "^1.0",
             "guzzlehttp/guzzle": "^7.2",
    -        "laravel/framework": "^9.19",
    -        "laravel/sanctum": "^2.14.1",
    -        "laravel/tinker": "^2.7",
    +        "laravel/framework": "^10.0",
    +        "laravel/sanctum": "^3.2",
    +        "laravel/tinker": "^2.8",
             "spatie/laravel-permission": "^5.5"
         },
         "require-dev": {
    @@ -19,11 +19,11 @@
             "laravel/breeze": "^1.7",
             "laravel/dusk": "^7.1",
             "laravel/pint": "^1.0",
    -        "laravel/sail": "^1.0.1",
    +        "laravel/sail": "^1.18",
             "mockery/mockery": "^1.4.4",
    -        "nunomaduro/collision": "^6.1",
    -        "phpunit/phpunit": "^9.5.10",
    -        "spatie/laravel-ignition": "^1.0"
    +        "nunomaduro/collision": "^7.0",
    +        "phpunit/phpunit": "^10.0",
    +        "spatie/laravel-ignition": "^2.0"
         },
         "autoload": {
             "psr-4": {

    Update Composer minimum stability

    One more change in the composer file: the minimum-stability setting needs to be updated to stable.

    "minimum-stability": "stable",

    After the composer.json changes, run composer update:

    ./vendor/bin/sail composer update

    Now open the application home page.

    If you want the updated welcome page, copy https://raw.githubusercontent.com/laravel/laravel/10.x/resources/views/welcome.blade.php into resources/views/welcome.blade.php.


    Update Docker Compose

    We are going to update docker-compose.yml with the latest changes from Laravel.

    The latest Laravel Sail uses PHP version 8.2. Find the final version of docker-compose.yml below:

    # For more information: https://laravel.com/docs/sail
    version: '3'
    services:
        laravel.test:
            build:
                context: ./vendor/laravel/sail/runtimes/8.2
                dockerfile: Dockerfile
                args:
                    WWWGROUP: '${WWWGROUP}'
            image: sail-8.2/app
            extra_hosts:
                - 'host.docker.internal:host-gateway'
            ports:
                - '${APP_PORT:-80}:80'
                - '${VITE_PORT:-5173}:${VITE_PORT:-5173}'
            environment:
                WWWUSER: '${WWWUSER}'
                LARAVEL_SAIL: 1
                XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
                XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
            volumes:
                - '.:/var/www/html'
            networks:
                - sail
            depends_on:
                - mysql
                - redis
                - meilisearch
                - mailpit
                - selenium
        mysql:
            image: 'mysql/mysql-server:8.0'
            ports:
                - '${FORWARD_DB_PORT:-3306}:3306'
            environment:
                MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
                MYSQL_ROOT_HOST: "%"
                MYSQL_DATABASE: '${DB_DATABASE}'
                MYSQL_USER: '${DB_USERNAME}'
                MYSQL_PASSWORD: '${DB_PASSWORD}'
                MYSQL_ALLOW_EMPTY_PASSWORD: 1
            volumes:
                - 'sail-mysql:/var/lib/mysql'
            networks:
                - sail
            healthcheck:
                test:
                    - CMD
                    - mysqladmin
                    - ping
                    - '-p${DB_PASSWORD}'
                retries: 3
                timeout: 5s
        redis:
            image: 'redis:alpine'
            ports:
                - '${FORWARD_REDIS_PORT:-6379}:6379'
            volumes:
                - 'sail-redis:/data'
            networks:
                - sail
            healthcheck:
                test:
                    - CMD
                    - redis-cli
                    - ping
                retries: 3
                timeout: 5s
        meilisearch:
            image: 'getmeili/meilisearch:latest'
            ports:
                - '${FORWARD_MEILISEARCH_PORT:-7700}:7700'
            volumes:
                - 'sail-meilisearch:/meili_data'
            networks:
                - sail
            healthcheck:
                test:
                    - CMD
                    - wget
                    - '--no-verbose'
                    - '--spider'
                    - 'http://localhost:7700/health'
                retries: 3
                timeout: 5s
        mailpit:
            image: 'axllent/mailpit:latest'
            ports:
                - '${FORWARD_MAILPIT_PORT:-1025}:1025'
                - '${FORWARD_MAILPIT_DASHBOARD_PORT:-8025}:8025'
            networks:
                - sail
        selenium:
            image: 'selenium/standalone-chrome'
            extra_hosts:
                - 'host.docker.internal:host-gateway'
            volumes:
                - '/dev/shm:/dev/shm'
            networks:
                - sail
        phpmyadmin:
            image: phpmyadmin/phpmyadmin
            links:
                - mysql:mysql
            ports:
                - 8080:80
            environment:
                MYSQL_USERNAME: "${DB_USERNAME}"
                MYSQL_ROOT_PASSWORD: "${DB_PASSWORD}"
                PMA_HOST: mysql
            networks:
                - sail
    networks:
        sail:
            driver: bridge
    volumes:
        sail-mysql:
            driver: local
        sail-redis:
            driver: local
        sail-meilisearch:
            driver: local

    We have successfully upgraded our Admin Panel to Laravel 10.x


  • Laravel: Automate Code Formatting!

    Pint is one of the newest members of the Laravel first-party package family and helps us keep our code readable and consistent.

    Installing and configuring Laravel Pint is easy, and it is built on top of PHP-CS-Fixer, so it has tons of rules to fix code style issues. (You don't need Laravel 9 to use Pint; it's a zero-dependency package.)

    But running Pint manually is quite painful, because every time we want to push our changes to the remote repository we have to run the command below by hand:

    ./vendor/bin/pint --dirty

    The --dirty flag makes Pint fix only the files that have uncommitted changes according to Git. If you want to check styles for all files, just remove the --dirty flag.

    In this article we will simply automate the Pint style check so it runs before every commit. That way the whole team keeps a well-defined code style, and nobody needs to remember to run Laravel Pint before pushing code to the remote repo!

    Before we start, note that this is a very simple setup; you can add as many options as you want to Laravel Pint.

    In order to run ./vendor/bin/pint --dirty just before every commit, we use Git's pre-commit hook inside the .git folder.

    First of all, we create a scripts folder inside the Laravel root directory. This folder will hold a setup.sh file and a pre-commit file without any extension.

    scripts/
        setup.sh
        pre-commit

    Inside our setup.sh we have:

    #! /usr/bin/env bash
    
    cp scripts/pre-commit .git/hooks/pre-commit
    chmod +x .git/hooks/pre-commit

    And write the following lines on pre-commit file:

    #! /usr/bin/env bash

    echo "Check php code styles..."
    echo "Running PHP cs-fixer"
    ./vendor/bin/pint --dirty
    git add .  # re-stage files Pint may have fixed (note: this stages all changes)
    echo "Done!"

    Second, go to the composer.json file and add this line to the scripts object (if the post-install-cmd key does not exist, create it and then add the line below):

    "post-install-cmd": [
                "bash scripts/setup.sh"
            ]

    Third, require the Pint package:

    composer require laravel/pint --dev

    And to be sure, don't forget to run:

    composer install

    The composer install command triggers our post-install-cmd script, which copies the pre-commit hook into the .git folder. After that, we are ready to go!

    From now on, we can simply write our code; just before we commit our changes, the Pint command will run automatically and fix our code styles!

    Pint uses the Laravel code style by default, but if you want to use PSR-12 like me, you can create a pint.json file inside the root directory of your Laravel project and copy the JSON below for a more opinionated PHP code style:

    {
        "preset": "psr12",
        "rules": {
            "simplified_null_return": true,
            "blank_line_before_statement": {
                "statements": ["return", "try"]
            },
            "binary_operator_spaces": {
                "operators": {
                    "=>": "align_single_space_minimal"
                }
            },
            "trim_array_spaces": false,
            "new_with_braces": {
                "anonymous_class": false
            }
        }
    }

    This is a simple config for our Pint command: it simplifies null returns and aligns the => indentation in arrays. You can check all the available options in the PHP-CS-Fixer documentation.
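
    As a rough illustration of what these rules do (a sketch, not Pint's exact output):

    <?php

    // Before Pint:
    function makeUser()
    {
        $user = [
            'id' => 1,
            'name' => 'admin',
        ];
        if (empty($user)) {
            return null;
        }
        return $user;
    }

    // After Pint with the preset above (roughly):
    function makeUserAfter()
    {
        $user = [
            'id'   => 1,       // "=>" aligned (binary_operator_spaces)
            'name' => 'admin',
        ];

        if (empty($user)) {
            return;            // "return null;" simplified (simplified_null_return)
        }

        return $user;          // blank line added above (blank_line_before_statement)
    }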


  • Laravel works with Large database records using the chunk method

    Your application's database records grow every day. As developers, we face performance and server memory issues when working with large tables. In this blog, we are going to process large table records and explain the importance of the Eloquent chunk method.

    We need a demo application to work with large records.

    Laravel Installation

    As usual, we are going to install the Basic Laravel Admin Panel locally. This basic admin comes with users, roles, and permissions.

    The Basic Laravel Admin Panel is based on Laravel Sail. What is Sail? Sail is a built-in solution for running your Laravel project using Docker.

    Refer to the installation steps at https://github.com/balajidharma/basic-laravel-admin-panel#installation and complete the installation.


    Demo data

    For demo records, we are going to create dummy users in the users table using a Laravel seeder. To generate a seeder, execute the make:seeder Artisan command.

    ./vendor/bin/sail php artisan make:seeder UserSeeder
    
    INFO Seeder [database/seeders/UserSeeder.php] created successfully.

    Open the generated seeder file located at database/seeders/UserSeeder.php and update it with the code below.

    <?php

    namespace Database\Seeders;

    use Illuminate\Database\Seeder;
    use Illuminate\Support\Facades\DB;
    use Illuminate\Support\Facades\Hash;
    use Illuminate\Support\Str;

    class UserSeeder extends Seeder
    {
        /**
         * Run the database seeds.
         *
         * @return void
         */
        public function run()
        {
            for ($i = 0; $i < 1000; $i++) {
                DB::table('users')->insert([
                    'name' => Str::random(10),
                    'email' => Str::random(10).'@gmail.com',
                    'password' => Hash::make('password'),
                ]);
            }
        }
    }
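
    As an aside, the loop above issues 1,000 single-row inserts and hashes the password every time. Here is a sketch of a faster run method that hashes once and inserts in batches (fittingly, using the collection chunk method):

    public function run()
    {
        $password = Hash::make('password'); // hash once; per-row hashing is the slow part

        collect(range(1, 1000))
            ->map(fn ($i) => [
                'name' => Str::random(10),
                'email' => Str::random(10).'@gmail.com',
                'password' => $password,
            ])
            ->chunk(100)
            ->each(fn ($rows) => DB::table('users')->insert($rows->values()->all()));
    }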

    Now run the seeder using the Artisan command below. It will take some extra time to complete the seeding.

    ./vendor/bin/sail php artisan db:seed --class=UserSeeder

    After the Artisan command finishes, verify the created users on the user list page at http://localhost/admin/user


    Processing large records

    Now we are going to process the large user records. Assume we need to send Black Friday offer notification emails to all users. Usually, we generate a new Artisan command and send the emails using a scheduled job.

    Memory issue

    We fetch all the users and send an email inside each loop iteration.

    $users = User::all();
    $users->each(function ($user, $key) {
        echo $user->name;
    });

    If you have millions of records, or your result collection loads a lot of relation data, your server will throw an "Allowed memory size of N bytes exhausted" error.

    To overcome this issue, we could process a limited batch of data at a time, saving the current offset in the database or cache.

    Example: the first run fetches 100 records and saves the offset 100 in a database table.
    The next run fetches records 100 to 200 and saves 200. So this method involves an additional fetch and update, and we also need to stop the job once all the records have been processed.

    Laravel provides a built-in solution, the Eloquent chunk method, to process large record sets.


    Laravel Eloquent chunk method

    The Laravel Eloquent chunk method retrieves a small chunk of results at a time and feeds each chunk into a closure for processing.

    User::chunk(100, function ($users) {
        foreach ($users as $user) {
            echo $user->name;
        }
    });

    Understand the chunk method

    I will create a function in the user controller and explain the chunk method in detail.

    Open routes/admin.php and add the route below:

    Route::get('send_emails', 'UserController@sendEmails');

    Now open the app/Http/Controllers/Admin/UserController.php and add the sendEmails method.

    Without chunk:
    After adding the code below, open the http://localhost/admin/send_emails page.

    public function sendEmails()
    {
        $users = User::all();
        $users->each(function ($user, $key) {
            echo $user->name;
        });
    }

    Open the Laravel Debugbar queries panel. The select * from users query fetches all 1000+ records.

    With chunk method:
    Replace the same function with the code below and check the page in the browser.

    public function sendEmails()
    {
        User::chunk(100, function ($users) {
            foreach ($users as $user) {
                echo $user->name;
            }
        });
    }

    The chunk method adds limits and offsets and processes all the records. With a chunk size of 100, it processes a collection of 100 records at a time, so no more memory issues.


    What is chunkById?

    The chunkById method automatically paginates the results based on the record's primary key. To understand it, update the sendEmails method again with the code below.

    public function sendEmails()
    {
        User::chunkById(100, function ($users) {
            foreach ($users as $user) {
                echo $user->name;
            }
        });
    }

    Now the user id is added to the where condition along with the limit of 100.

    // chunkById
    select * from `users` where `id` > 100 order by `id` asc limit 100
    select * from `users` where `id` > 200 order by `id` asc limit 100
    select * from `users` where `id` > 300 order by `id` asc limit 100
    // chunk
    select * from `users` order by `users`.`id` asc limit 100 offset 0
    select * from `users` order by `users`.`id` asc limit 100 offset 100
    select * from `users` order by `users`.`id` asc limit 100 offset 200

    chunkById is recommended when updating or deleting records inside the closure (in the loop): with plain chunk, changing the rows you are iterating over shifts the offsets, so records can be skipped.
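
    For example, a sketch of deactivating users in place (the active column here is hypothetical):

    User::where('active', true)->chunkById(100, function ($users) {
        foreach ($users as $user) {
            $user->update(['active' => false]);
        }
    });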


    Conclusion

    The Eloquent chunk method is very useful when you work with large record sets. Also, read about the collection chunk method.

  • Restructuring a Laravel controller using Services & Action Classes

    Laravel Refactoring — Laravel creates an admin panel from scratch — Part 11

    In the previous part, we moved the UserController store method validation to a Form Request. In this part, we are going to explore and use the trending Action and Service classes.

    We are going to cover the topics below in this blog:

    • Laravel project structure
    • Controller Refactoring
    • Service Class
      • What is Service Class
      • Implement Service Class
    • Action Class
      • Implement Action Class
    • Advantages of Services & Action Classes
    • Disadvantages of Services & Action Classes
    • Conclusion

    Laravel project structure

    Laravel does not restrict your project structure, and it does not suggest one either. So, you have the freedom to choose your own project structure.

    Laravel gives you the flexibility to choose the structure yourself.

    We will explore both Service & Action classes and use them in our Laravel basic admin panel.

    Controller Refactoring

    The UserController store function performs the 3 actions below.

    public function store(StoreUserRequest  $request)
    {
        // 1.Create a user
        $user = User::create([
            'name' => $request->name,
            'email' => $request->email,
            'password' => Hash::make($request->password)
        ]);
        // 2.Assign role to user
        if(! empty($request->roles)) {
            $user->assignRole($request->roles);
        }
        // 3.Redirect with message
        return redirect()->route('user.index')
                        ->with('message','User created successfully.');
    }
    

    To refactor further, we can move the logic into another class. These new classes are called Service and Action classes. We will see them one by one.


    Service Class

    We decided to move the logic into another class. Laravel best practices suggest moving business logic from controllers to service classes, following the single-responsibility principle (SRP). A service class is just a plain PHP class that holds our logic.

    What is Service Class

    A service is a very simple class that does not extend any base class; it is just a standalone PHP class.

    We are going to create a new app/Services/Admin/UserService.php service class with a createUser method. This is a custom PHP class in Laravel, so there is no Artisan command for it; we need to create it manually.

    Implement Service Class

    app/Services/Admin/UserService.php

    <?php
    namespace App\Services\Admin;
    
    use App\Models\User;
    use Illuminate\Support\Facades\Hash;
    
    class UserService
    {
        public function createUser($data): User
        {
            $user = User::create([
                'name' => $data->name,
                'email' => $data->email,
                'password' => Hash::make($data->password),
            ]);
    
            if(! empty($data->roles)) {
                $user->assignRole($data->roles);
            }
    
            return $user;
        }
    }

    Then, call this method in the UserController. For automatic injection, you may type-hint the dependency in the controller method.

    Blog updated: earlier I passed the $request (function createUser(Request $request)) directly to the service class. But the service can be used by other callers, so the $request is converted to a plain object and passed as a parameter.

    app/Http/Controllers/Admin/UserController.php

    use App\Services\Admin\UserService;
    public function store(StoreUserRequest $request, UserService $userService)
    {
        $userService->createUser((object) $request->all());
        return redirect()->route('user.index')
                        ->with('message','User created successfully.');
    }

    We can refactor the UserService class some more by moving the role assignment to a new method.

    app/Services/Admin/UserService.php

    class UserService
    {
        public function createUser($data): User
        {
            $user = User::create([
                'name' => $data->name,
                'email' => $data->email,
                'password' => Hash::make($data->password),
            ]);
            return $user;
        }
        public function assignRole($data, User $user): void
        {
            $roles = $data->roles ?? [];
            $user->assignRole($roles);
        }
    }

    app/Http/Controllers/Admin/UserController.php

    public function store(StoreUserRequest $request, UserService $userService)
    {
        $data = (object) $request->all();
        $user = $userService->createUser($data);
        $userService->assignRole($data, $user);
        return redirect()->route('user.index')
                        ->with('message','User created successfully.');
    }

    Now we have implemented the Service class. We will discuss the benefits at the end of the blog.



    Action Class

    In the Laravel community, the concept of Action classes has become very popular in recent years. An action is a very simple PHP class, similar to the Service class, but an Action class has only one public method, usually named execute or handle, though you could name it whatever you want.

    Implement Action Class

    We are going to create a new app/Actions/Admin/User/CreateUser.php Action class with a single handle method.

    app/Actions/Admin/User/CreateUser.php

    <?php
    
    namespace App\Actions\Admin\User;
    
    use App\Models\User;
    use Illuminate\Support\Facades\Hash;
    
    class CreateUser
    {
        public function handle($data): User
        {
            $user = User::create([
                'name' => $data->name,
                'email' => $data->email,
                'password' => Hash::make($data->password),
            ]);
    
            $roles = $data->roles ?? [];
            $user->assignRole($roles);
    
            return $user;
        }
    }

    Now call this handle method in the UserController, using method injection to resolve CreateUser.

    app/Http/Controllers/Admin/UserController.php

    public function store(StoreUserRequest $request, CreateUser $createUser)
    {
        $createUser->handle((object) $request->all());
        return redirect()->route('user.index')
                        ->with('message','User created successfully.');
    }

    The biggest advantage of this Action class is that we don't have to worry about the function name, because there is always a single function such as handle.


    Advantages of Services & Action Classes

    • Code reusability: We can call the method from an Artisan command, and it is also easy to call from other controllers (see the sketch after this list).
    • Single-responsibility principle (SRP): Achieved SRP by using Services & Action Classes
    • Avoid Conflict: Easy to manage code for larger applications with a large development team.
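
    For instance, a hypothetical console command that reuses the same CreateUser action outside the controller:

    <?php

    namespace App\Console\Commands;

    use App\Actions\Admin\User\CreateUser;
    use Illuminate\Console\Command;

    class CreateUserCommand extends Command
    {
        protected $signature = 'user:create {name} {email} {password}';

        protected $description = 'Create a user from the console';

        // Laravel injects the action into handle(), just like in the controller.
        public function handle(CreateUser $createUser): int
        {
            $createUser->handle((object) [
                'name' => $this->argument('name'),
                'email' => $this->argument('email'),
                'password' => $this->argument('password'),
                'roles' => [],
            ]);

            return self::SUCCESS;
        }
    }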

    Disadvantages of Services & Action Classes

    • Too many classes: We need to create many classes for a single piece of functionality
    • Small applications: Not recommended for smaller applications

    Conclusion

    As said earlier, Laravel gives you the flexibility to choose the structure yourself. Service and Action classes are one way to structure a project. They are recommended for large-scale applications to avoid conflicts and ship releases faster.

    For the Laravel Basic Admin Panel, I am going with Action classes.

    The Laravel admin panel is available at https://github.com/balajidharma/basic-laravel-admin-panel. Install the admin panel and share your feedback.

    Thank you for reading.

    Stay tuned for more!

    Follow me at balajidharma.medium.com.


    References

    https://freek.dev/1371-refactoring-to-actions
    https://laravel-news.com/controller-refactor
    https://farhan.dev/tutorial/laravel-service-classes-explained/
  • Resize ext4 file system

    Using Growpart

    $ growpart /dev/sda 1
    CHANGED: partition=1 start=2048 old: size=39999455 end=40001503 new: size=80000991,end=80003039
    $ resize2fs /dev/sda1
    resize2fs 1.45.4 (23-Sep-2019)
    Filesystem at /dev/sda1 is mounted on /; on-line resizing required
    old_desc_blocks = 3, new_desc_blocks = 5
    The filesystem on /dev/sda1 is now 10000123 (4k) blocks long.

    Using Parted & resize2fs

    apt-get -y install parted
    parted /dev/vda unit s print all # print the current layout, just in case
    parted /dev/vda resizepart 2 yes -- -1s # resize /dev/vda2 first
    parted /dev/vda resizepart 5 yes -- -1s # resize /dev/vda5
    partprobe /dev/vda # re-read partition table
    resize2fs /dev/vda5 # get your space

    Parted doesn't work on ext4 on CentOS. I had to use fdisk to delete and recreate the partition, which (I validated) works without losing data. I followed the steps at http://geekpeek.net/resize-filesystem-fdisk-resize2fs/. Here they are, in a nutshell:

    $ sudo fdisk /dev/sdx
    > c            # toggle off DOS compatibility mode
    > u            # change display units to sectors
    > p            # print the partition table (note the start sector!)
    > d            # delete the partition
    > p            # confirm it is gone
    > w            # write changes
    $ sudo fdisk /dev/sdx
    > c            # toggle off DOS compatibility mode
    > u            # change display units to sectors
    > p            # print the (now empty) table
    > n            # new partition
    > p            # primary
    > 1            # partition number 1
    > (default)    # first sector: same start as the old partition
    > (default)    # last sector: end of disk (this is the growth)
    > p            # verify before writing
    > w            # write changes, then run resize2fs

    Source: https://serverfault.com/questions/509468/how-to-extend-an-ext4-partition-and-filesystem