How I Host This Site

This is Part 2 in a series where I describe how this site is built and hosted. If you haven't read Part 1 yet, I encourage you to check it out here: Part 1 - How I Built This Site

In this part of the series, I'm going to cover everything that happens from the time I commit & push my code to the repository up to the point it's live on the site.

Around the same time I was working on building this site, I was also tinkering with setting up a homelab server. I already had several Raspberry Pis running various self-hosted applications, primarily Home Assistant, along with a media server and Pi-hole. I also had an older desktop PC that didn't get much use anymore, and decided to turn it into a dedicated home server to host all of these services.

I started by installing Proxmox Virtual Environment on the server. From there, I was easily able to spin up virtual machines and Linux containers. I set up a dedicated Home Assistant virtual machine, along with a Linux container running Dockge to manage my various docker-compose-based services. With this in place, I was determined to get my newly rebuilt personal site up & running as a containerized application on my home server.

I knew I would need a MySQL instance along with a Redis cache, so I started by setting up containers for each of them in Dockge. I also set up OneDev as an all-in-one self-hosted DevOps platform. OneDev provides a git repository, bug tracking, task management, CI/CD build pipelines, and more. It's a fantastic piece of software.
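Both of those run as plain docker-compose services in a Dockge stack. A minimal sketch of roughly what that looks like (the service names, image versions, and credentials here are placeholders, not my actual configuration):

services:
  mysql:
    image: mysql:8.0
    restart: unless-stopped
    environment:
      MYSQL_DATABASE: portfolio          # placeholder database name
      MYSQL_USER: portfolio
      MYSQL_PASSWORD: changeme           # placeholders; real values live in Dockge, not source control
      MYSQL_ROOT_PASSWORD: changeme
    volumes:
      - mysql-data:/var/lib/mysql        # persist the database across container restarts

  redis:
    image: redis:7
    restart: unless-stopped

volumes:
  mysql-data: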

To containerize the application, I've set up the following Dockerfile:

# PHP CLI base image; artisan serve will handle HTTP directly
FROM php:8.2-cli

# Allow Composer to run as root inside the container
ENV COMPOSER_ALLOW_SUPERUSER=1

# System libraries required by the PHP extensions installed below
RUN apt-get update -y && apt-get install -y libmcrypt-dev git libzip-dev libicu-dev

# Install Composer and the PHP extensions the app needs
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN docker-php-ext-install intl pdo pdo_mysql zip exif

# Copy the application source and install PHP dependencies
WORKDIR /app
COPY . /app

RUN composer install

# Serve the app with Laravel's built-in development server
EXPOSE 8000
CMD php artisan serve --host=0.0.0.0 --port=8000

I realize this wouldn't be suitable for a production environment, but I'm a firm believer in YAGNI, and it works fine for a personal site that sees a small volume of traffic.
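Running the published image out of Dockge is then just one more compose service alongside MySQL and Redis. A minimal sketch, assuming the app shares a network with the services sketched earlier (the port mapping matches the EXPOSE in the Dockerfile):

services:
  portfolio:
    image: kylekanderson/portfolio:latest   # the image the build pipeline below publishes
    restart: unless-stopped
    ports:
      - "8000:8000"                         # artisan serve listens on 8000 inside the container
    environment:
      DB_HOST: mysql                        # assumes the mysql service sketched earlier
      REDIS_HOST: redis                     # assumes the redis service sketched earlier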

In OneDev, I use a build pipeline to build and publish the Docker image of my repository whenever commits are pushed to the main branch. OneDev offers a UI-based pipeline builder with several pre-made build-step components, or you can roll your own by defining a .onedev-buildspec.yml file with your own build configuration. Here's an example of what I'm using:

version: 33
jobs:
- name: Build and Publish Image
  steps:
  - !CheckoutStep
    name: Checkout Code
    cloneCredential: !DefaultCredential {}
    withLfs: false
    withSubmodules: false
    condition: ALL_PREVIOUS_STEPS_WERE_SUCCESSFUL
  - !CommandStep
    name: Composer Install
    runInContainer: true
    image: php:8.2-cli
    interpreter: !DefaultInterpreter
      commands: |
        export COMPOSER_ALLOW_SUPERUSER=1
        apt-get update -y && apt-get install -y libmcrypt-dev git libzip-dev libicu-dev
        curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
        docker-php-ext-install intl pdo pdo_mysql zip exif
        composer install --no-dev
    useTTY: true
    condition: ALL_PREVIOUS_STEPS_WERE_SUCCESSFUL
  - !CommandStep
    name: Build App
    runInContainer: true
    image: node:latest
    interpreter: !DefaultInterpreter
      commands: |
        set -e
        apt update
        apt install -y jq
        npm ci
        npm run build
        rm -rf node_modules
    useTTY: true
    condition: ALL_PREVIOUS_STEPS_WERE_SUCCESSFUL
  - !BuildImageStep
    name: Build & Publish Docker Image
    output: !RegistryOutput
      tags: kylekanderson/portfolio:latest
    platforms: linux/amd64
    condition: ALL_PREVIOUS_STEPS_WERE_SUCCESSFUL
  triggers:
  - !BranchUpdateTrigger {}
  retryCondition: never
  maxRetries: 3
  retryDelay: 30
  timeout: 3600

There's a lot there, but it boils down to four simple steps:

  1. Check out the code

  2. Set up the build machine dependencies and run composer install

  3. Bundle the app assets with npm run build

  4. Build and publish the Docker image to a private repository on Docker Hub

At this point, I don't have a means of automatically updating the Docker image running in Dockge. So, once the app is built and published to Docker Hub, I manually pull the update into Dockge. That's a fairly quick and painless process, though.
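In compose terms, that manual update amounts to something like the following, run against the app's stack (Dockge surfaces the same operations in its UI):

docker compose pull    # fetch the newly published image from Docker Hub
docker compose up -d   # recreate the running container with the new image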

One other thing I haven't mentioned yet: storage of user-uploaded assets (and by that, I mostly mean files that I upload to the site via the admin panel). If I were running the app in a more standard server configuration where I had access to a permanent file system, I could simply store these assets directly in the server's file system. However, in a containerized application with ephemeral file storage, I chose to use MinIO as a self-hosted, S3-compatible file storage option. Because MinIO is S3-compatible, I was able to use Laravel's S3 storage driver by just swapping in my MinIO credentials in place of AWS credentials, and everything worked as expected.
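As a sketch of what that swap looks like, these are the environment variables Laravel's stock s3 disk reads, pointed at a MinIO endpoint (the credentials, bucket name, and MinIO hostname here are placeholders, not my real values):

services:
  portfolio:
    environment:
      FILESYSTEM_DISK: s3                     # route Laravel's default disk to the S3 driver
      AWS_ACCESS_KEY_ID: minio-access-key     # MinIO credentials, not real AWS keys
      AWS_SECRET_ACCESS_KEY: minio-secret-key
      AWS_DEFAULT_REGION: us-east-1           # Laravel expects a region; MinIO ignores it
      AWS_BUCKET: portfolio-uploads           # placeholder bucket name
      AWS_ENDPOINT: http://minio:9000         # point the S3 driver at MinIO instead of AWS
      AWS_USE_PATH_STYLE_ENDPOINT: "true"     # MinIO serves path-style URLs by default

The path-style setting is the one that's easy to miss: without it, the AWS SDK builds bucket-subdomain URLs, which a stock MinIO install doesn't serve.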

And those are the basics of how the site is hosted. There are a few other things happening behind the scenes that I may write about in a Part 3. Stay tuned for that, or more rambling thoughts on other related topics in the coming days and weeks.