Is there a neat playbook or tutorial set one can refer to when trying to set up their own static website from home?

So far I have done the following:

  1. Got a Raspberry Pi (a Raspberry Pi Zero 2 W) with Raspberry Pi OS installed and fail2ban ready.
  2. Installed nginx (I have not configured anything there).
  3. Written the HTML and CSS files for the website.
  4. Purchased a domain.

How do I complete the remaining pieces of this puzzle?

My purpose: I want an online profile that I can share with my colleagues and clients instead of relying on LinkedIn as a way to connect. Eventually, I will stop posting on LinkedIn and make this my main method of relaying information and disseminating my works and services.

  • starshipwinepineapple@programming.dev · 3 days ago (edited)

    This is something that doesn’t really need to be self hosted unless you’re wanting the experience. You just need:

    1. A static site generator. I use Hugo, but there are a few others like Jekyll and Astro.
    2. A git forge (GitHub, GitLab, Codeberg).
    3. Your forge’s Pages feature; there’s also Cloudflare Pages. Stay away from Netlify imo. Each of these can be set up to use your own domain.

    So for my website I just write new content, push to my forge, and then a pipeline builds and releases the update on my website.
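    For reference, the Hugo side can be as small as the quickstart commands below. This is just a sketch; the Ananke theme is the example theme the Hugo docs use, not a recommendation:

```shell
# Hugo quickstart sketch -- site name and theme are examples.
hugo new site mysite
cd mysite
git init
git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke themes/ananke
echo "theme = 'ananke'" >> hugo.toml   # config.toml on older Hugo versions
hugo new content posts/hello.md
hugo server -D                         # local preview at http://localhost:1313
```

    After that, committing and pushing the project triggers whatever Pages pipeline your forge runs.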

    Where self hosting comes into play is that it could make some things with static websites easier, like some comment systems, contact forms, etc. But you can still do all of this without self hosting. Comments can be handled through git issues (utteranc.es) and for a contact form I use HeroTofu’s free tier. In the end I don’t have to worry about opening access to my ports and can still have a static website with a contact form. All free apart from the cost of the domain.

    • sugar_in_your_tea@sh.itjust.works · 3 days ago

      But if you want to self host, you just need a webserver to serve static files. If you already have other stuff hosted, you probably already have one, so just point it to your HTML files (and potentially generate them with a tool).
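      For a plain static site, that web server config is tiny. A minimal nginx sketch (domain and paths are examples, not anything from this thread):

```nginx
# Serve a directory of static files; swap in your own domain and path.
server {
    listen 80;
    server_name mydomain.org;
    root /var/www/mysite;
    index index.html;
}
```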

  • traches@sh.itjust.works · 4 days ago (edited)

    The trickier part here is connecting your domain to your raspberry pi and allowing the big internet to access it. You have a few options:

    • Set up dynamic DNS to direct your domain name to your (presumably dynamic) home IP address. Assign the rpi a static IP address on your home network. Forward ports 80 and 443 to that address. The world knows your home IP address, and you’re dependent on your router for security. No spam or DDoS protection.
    • Use a service such as cloudflare tunnel. You’re dependent on cloudflare or whoever, but it’s an easier config, you don’t need to open ports in your firewall, and your home IP address is not public. (I recommend this option.)
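    For the first option, the dynamic-DNS piece can be as simple as a script run from cron every few minutes. The update URL below is a hypothetical dyndns2-style endpoint; check your DNS provider’s docs for the real one:

```shell
#!/bin/sh
# Dynamic DNS update sketch. Hostname, endpoint, and credentials
# are placeholders -- substitute your provider's actual values.
IP=$(curl -fsS https://api.ipify.org)
curl -fsS -u "user:password" \
  "https://dyndns.example.com/update?hostname=mydomain.org&myip=${IP}"
```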

    Either way, don’t forget to set up HTTPS. If you aren’t dead-set on using nginx, Caddy does this entirely automatically.
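    For a sense of scale, a complete Caddyfile for a static site can be this short (domain and path are examples); Caddy obtains and renews the certificate on its own:

```caddyfile
mydomain.org {
    root * /var/www/mysite
    file_server
}
```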

  • vegetaaaaaaa@lemmy.world · 4 days ago (edited)

    By default nginx will serve the contents of the /var/www/html (a.k.a. documentroot) directory regardless of what domain is used to access it. So you could build your static site using the tool of your choice (Hugo, Sphinx, Jekyll, …), put your index.html and all other files directly under that directory, access your server at http://ip_address, and have your static site served like that.

    Step 2 is to automate the process of rebuilding your site and placing the files under the correct directory with the correct ownership and permissions. A basic shell script will do it.
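    A sketch of such a script, assuming a Hugo project and nginx’s default documentroot (the paths and the www-data user/group are assumptions to adjust):

```shell
#!/bin/sh
# Rebuild the site and publish it to nginx's documentroot.
set -eu

SRC="$HOME/mysite"     # your site source (Hugo project, or plain HTML/CSS)
DEST="/var/www/html"

hugo --source "$SRC" --minify              # skip if you write HTML by hand
sudo rsync -a --delete "$SRC/public/" "$DEST/"
sudo chown -R www-data:www-data "$DEST"    # nginx's user on Debian-likes
sudo chmod -R a+rX "$DEST"
```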

    Step 3 is to point your domain (a DNS record) at your server’s public IP address and forward public port 80 to your server’s port 80. From there you will be able to access the site from the internet at http://mydomain.org/

    Step 4 is to configure nginx for proper virtualhost handling (that is, direct requests made for mydomain.org to your site under the /var/www/html/ directory, and all other requests, like http://public_ip, to a default, blank virtualhost. You may as well use an empty /var/www/html for the default site, and move your static site to a dedicated directory.) This is not a strict requirement, but it helps in case you need to host multiple sites, is best practice, and is a requirement for the following step.
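    A sketch of that vhost split (directories and domain are examples):

```nginx
# Default catch-all: anything not matching a named vhost lands here.
server {
    listen 80 default_server;
    server_name _;
    root /var/www/html;        # left empty on purpose
}

# The actual site, matched by Host header.
server {
    listen 80;
    server_name mydomain.org;
    root /var/www/mysite;
    index index.html;
}
```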

    Step 5 is to set up SSL/TLS certificates to serve your site at https://my_domain (HTTPS). Nowadays this is mostly done using an automatic certificate generation service such as Let’s Encrypt or any other ACME provider. certbot is the best-known tool to do this (but not necessarily the simplest).
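    With certbot’s nginx plugin the whole flow is usually just a few commands (package names are the Debian/Raspberry Pi OS ones; the domain is an example):

```shell
# Install certbot and its nginx integration, then request a certificate.
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d mydomain.org   # edits the vhost, sets up renewal
sudo certbot renew --dry-run           # confirm automatic renewal works
```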

    Step 6 is what you should have done at step 1: harden your server. Set up a firewall, fail2ban, SSH keys, and anything else you can find to make it harder for an attacker to gain write access to your server, or read access to places they shouldn’t be able to read.
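    A few typical hardening commands on Debian-based systems such as Raspberry Pi OS (a starting point, not an exhaustive list):

```shell
# Firewall: deny everything inbound except SSH and the web ports.
sudo ufw default deny incoming
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable

# SSH: keys only, no root login. In /etc/ssh/sshd_config set:
#   PasswordAuthentication no
#   PermitRootLogin no
sudo systemctl restart ssh

# Keep fail2ban running across reboots.
sudo systemctl enable --now fail2ban
```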

    Step 7 is to destroy everything and do it again from scratch. You’ve documented or scripted all the steps, right?

    As for the question “how do I actually implement all this? Which config files, and what do I put in them?”, the answer is the same old one: RTFM. Yes, even the boring nginx docs, manpages and 1990s Linux stuff. Each step will bring its own challenges and teach you a few concepts, one at a time. Reading guides can still be a good start for a quick and dirty setup, and will at least show you what can be done. The first time you do this, it can take a few days or weeks. After a few months of practice you will be able to do all of it in less than 10 minutes.

      • vegetaaaaaaa@lemmy.world · 2 days ago

        Sometimes you need to understand the basics first. The points I listed are sysadmin 101. If you don’t understand these very basic concepts, there is no chance you will be able to keep any kind of server running, understand how it works, debug certificate problems and so on. Once you’re comfortable with that? Sure, use something “simpler” (a.k.a. another abstraction layer); Caddy is nice. The same point was made in the past about Apache (“just use nginx, it’s simpler”). Meanwhile I still use Apache, but if needed I’m able to configure any kind of web server, because I taught myself the fundamentals.

        At some point we have to refuse the temptation to go the “easy” way when working with complex systems - IT and networking are complex. Just try the hard way first, read the docs, and if it’s too complex/overwhelming/time-consuming, only then go for a more “noob-friendly” solution (I mean we’re on c/selfhosted, why not just buy a commercial NAS or use a hosted service instead? It’s easier). I use firewalld but I learned the basics of iptables a while ago. I don’t build apache from source when I need to upgrade, but I would know how to get 75% there - the docs would teach me the rest.

        • sugar_in_your_tea@sh.itjust.works · 2 days ago

          I get your point in general, but I think some points are odd.

          For example, Apache overly complicates a simple task. A web server is simple; the only moving parts in a web request are:

          • TLS - mostly just a cert pair and some config if you want to restrict which clients to support (security concerns)
          • HTTP headers
          • URL routing

          You can learn the details of HTTP in about 15 minutes on Wikipedia, whereas you probably won’t get past the introduction in Apache docs in that time. It’s like learning to drive on a big rig with double clutches. Why do that if you don’t need to?

          With a typical self-hosted setup, you can keep it simple: have your webserver handle the first and pass the rest on to the relevant service. You’re unlikely to need load balancing, malicious request detection, etc.; you just need to terminate TLS and route things.

          You’re not gaining anything by learning a complex tool to accomplish a simple task.

          I’m a developer and I’ve written tons of web servers, and I see zero point in Apache or even nginx for a home lab setup when I could write (and have written) a simple reverse proxy in something like Go in about 30 minutes. It’s easy: handle TLS and HTTP (both built in to the standard library), then send the request along to the relevant service. It’s probably easier to build that than to learn nginx or Apache syntax.

          There’s certainly more to it if you consider high load systems like in an enterprise, but the average home user doesn’t need all that.

          Caddy does everything I need:

          • renew Let’s Encrypt certs
          • proxy based on subdomain
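          Which, in a Caddyfile, is about this much config per service (hostname and port are examples):

```caddyfile
git.example.com {
    reverse_proxy 127.0.0.1:3000
}
```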

          I’ve done it the hard way, and I don’t feel like I gained anything. Everything involved is simple:

          • TLS - keypair; server needs both, clients just need the pub key
          • HTTP - one line with HTTP verb, URL, and version, then lines with headers as key/value pairs (to route, you only need the URL)
          • renewals - most people will copy-paste an ACME client invocation anyway
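          For scale, the entire request a router has to parse looks like this (domain and path are examples):

```http
GET /blog/post.html HTTP/1.1
Host: mydomain.org
Accept: text/html
```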

          I’ve done it “the hard way” by scripting up cron and configuring nginx, but what’s the value of learning that when your web server can do it automatically for you? Gatekeeping?

          I agree in general that people should learn how things work. Learn what TLS, HTTP, and whatnot are and do so you can debug stuff. But don’t feel obligated to learn complex software when you just need something simple.

          In other words, YAGNI: You Ain’t Gonna Need It. Or KISS: Keep It Simple, Stupid. Don’t copy-paste “magic” Apache or nginx incantations from the internet; use something simple and focus on learning fundamentals.

  • NastyNative@mander.xyz · 3 days ago

    The big issue with this is opening those ports to the internet. Hosting is so cheap now that it’s not worth the risk, in my opinion.

  • Dust0741@lemmy.world · 4 days ago

    I know it’s not self hosting, but I went with a Hugo site hosted on Cloudflare pages. That way I don’t have to port forward or worry about uptime or security.

    • merthyr1831@lemmy.ml · 4 days ago

      You can do the same on GitHub too. It’s pretty seamless in my experience, and I don’t mind people seeing the source code for my blog.

      • tofubl@discuss.tchncs.de · 4 days ago (edited)

        You can set up your project in a private repo and have your deploy action push the built site to the main branch of your public Pages repo. I agree it’s not a huge deal to show the source, but I prefer it like that.

        name: Deploy Hugo site to GitHub Pages
        
        on:
          push:
            branches:
              - main
          workflow_dispatch:
        
        jobs:
          build:
            runs-on: ubuntu-latest
        
            steps:
              - name: Checkout repository
                uses: actions/checkout@v4
        
              - name: Set up Hugo
                uses: peaceiris/actions-hugo@v3
                with:
                  hugo-version: "0.119.0"
                  extended: true
        
              - name: Build
                run: hugo --minify
        
              - name: Configure Git
                run: |
                  git config --global user.email "[email protected]"
                  git config --global user.name "Your Name"
              - name: Deploy to GitHub Pages
                env:
                  GITHUB_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
                run: |
                  cd public
                  git init
                  git remote add origin https://USER:${{ secrets.DEPLOY_TOKEN }}@github.com/USER/USER.github.io.git
                  git checkout -b main
                  git add .
                  git commit -m "Deploy site"
                  git push -f origin main
        


      • Dust0741@lemmy.world · 4 days ago

        Yup for sure. I specifically have mine open source. I have my domain through Cloudflare so that made sense.

  • Chris@lemmy.world · 4 days ago

    Just use GitHub Pages. Super simple, and driven by your source code.