This is how I took a Lenovo Legion Y540 and made it a small, reliable home server. Simple steps. Clear commands.
The plan
Use what I already have:
- Lenovo Legion Y540: i5-9300H, 8 GB RAM, 256 GB NVMe + 1 TB HDD, GTX 1650.
- A light Linux with a Wi-Fi icon. I don’t want to fight drivers.
Goals:
- Always on.
- Reachable from anywhere.
- Dev tools ready.
- Terminals and windows come back after reboot.
The OS that didn’t fight me
Ubuntu Server was painful for Wi-Fi.
I moved to Linux Mint Xfce. Installed on the 256 GB SSD. Kept the 1 TB HDD for data.
Mount the big drive, safely
My HDD had two NTFS partitions. I mounted them as /data and /backup without formatting.
sudo apt update && sudo apt install -y ntfs-3g
sudo mkdir -p /data /backup
# find UUIDs
sudo blkid /dev/sda2 /dev/sda3
# add to /etc/fstab (replace UUID=...; fstab does not expand $(id -u), so use your numeric IDs from `id -u` and `id -g`, usually 1000)
UUID=... /data ntfs defaults,uid=1000,gid=1000,umask=022,noatime,x-systemd.automount,x-systemd.idle-timeout=600 0 0
UUID=... /backup ntfs defaults,uid=1000,gid=1000,umask=022,noatime,x-systemd.automount,x-systemd.idle-timeout=600 0 0
sudo systemctl daemon-reload
sudo mount -a
ln -s /data ~/data
ln -s /backup ~/backup
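A quick check that both mounts actually came up before trusting them with data:
# listing the directories triggers the automount, then df should show the NTFS sizes
ls /data /backup
df -h /data /backup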
Make it stay awake
I want a server, not a sleepy laptop.
# ignore lid and any sleep/hibernate
sudo mkdir -p /etc/systemd/logind.conf.d
sudo tee /etc/systemd/logind.conf.d/99-server.conf >/dev/null <<'EOF'
[Login]
HandleLidSwitch=ignore
HandleLidSwitchExternalPower=ignore
HandleLidSwitchDocked=ignore
HandleSuspendKey=ignore
HandleHibernateKey=ignore
IdleAction=ignore
EOF
sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target
sudo systemctl restart systemd-logind
In BIOS:
- After Power Loss / AC Back → Power On.
- Enable Wake on LAN for Ethernet if you want to wake it on the network.
Keep Wi-Fi from sleeping:
sudo mkdir -p /etc/NetworkManager/conf.d
printf "[connection]\nwifi.powersave = 2\n" | sudo tee /etc/NetworkManager/conf.d/wifi_powersave.conf
sudo systemctl restart NetworkManager
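To confirm the radio really stops power saving, `iw` (if installed) can report it; the interface name below is only an example, use whatever `iw dev` lists:
# expect "Power save: off" after the NetworkManager restart
iw dev
iw dev wlp0s20f3 get power_save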
Private network for remote access
Use Tailscale. No router changes. Works from anywhere.
curl -fsSL https://tailscale.com/install.sh | sh
sudo systemctl enable --now tailscaled
sudo tailscale up
OpenSSH and firewall (Mint's desktop image doesn't ship the SSH server, so install it first):
sudo apt install -y openssh-server
sudo systemctl enable --now ssh
sudo ufw allow OpenSSH
sudo ufw allow in on tailscale0
sudo ufw --force enable
Now the box shows in the Tailscale admin with a 100.x.x.x address and a tailnet name.
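Two quick checks from the server side:
# list peers and print this machine's tailnet IPv4
tailscale status
tailscale ip -4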
Shell and sessions that survive reboots
Zsh + Powerlevel10k:
sudo apt install -y zsh git curl fzf zsh-autosuggestions zsh-syntax-highlighting
export RUNZSH=no CHSH=no
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
git clone --depth=1 https://github.com/romkatv/powerlevel10k.git \
${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k
sed -i 's|^ZSH_THEME=.*|ZSH_THEME="powerlevel10k/powerlevel10k"|' ~/.zshrc
chsh -s "$(which zsh)" "$USER"
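The apt-installed autosuggestions and syntax-highlighting plugins aren't picked up by Oh My Zsh on their own; sourcing them directly works. The paths below are the Ubuntu/Mint package defaults:
echo 'source /usr/share/zsh-autosuggestions/zsh-autosuggestions.zsh' >> ~/.zshrc
echo 'source /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh' >> ~/.zshrc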
tmux that auto-restores:
sudo apt install -y tmux
git clone https://github.com/tmux-plugins/tpm ~/.tmux/plugins/tpm
cat >> ~/.tmux.conf <<'EOF'
set -g history-limit 100000
set -g default-terminal "tmux-256color"
set -g @plugin 'tmux-plugins/tpm'
set -g @plugin 'tmux-plugins/tmux-resurrect'
set -g @plugin 'tmux-plugins/tmux-continuum'
set -g @resurrect-capture-pane-contents on
set -g @continuum-save-interval 15
set -g @continuum-restore on
run -b '~/.tmux/plugins/tpm/tpm'
EOF
tmux new -d -s setup
# inside tmux later: Ctrl-b then Shift+I to install plugins
# manual save before a reboot (a sketch for automating this follows)
~/.tmux/plugins/tmux-resurrect/scripts/save.sh
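To automate that, one small alias is enough; the alias name is made up:
# hypothetical helper: snapshot tmux state, then reboot
echo 'alias srv-reboot="~/.tmux/plugins/tmux-resurrect/scripts/save.sh; sudo reboot"' >> ~/.zshrc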
Dev essentials (the stuff I actually use)
Docker engine + Compose v2 + sane logs
sudo apt update
sudo apt install -y docker.io docker-compose-v2
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
newgrp docker
# prevent giant logs
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{ "log-driver": "json-file", "log-opts": { "max-size": "10m", "max-file": "3" } }
EOF
sudo systemctl restart docker
docker run --rm hello-world
Portainer (simple Docker UI)
docker volume create portainer_data
docker run -d \
  -p 9000:9000 \
  --name portainer \
  --restart=unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
Tip: expose it only inside your tailnet:
tailscale serve https / http://localhost:9000
asdf (one tool for runtimes)
git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.14.1
echo '. "$HOME/.asdf/asdf.sh"' >> ~/.zshrc
. "$HOME/.asdf/asdf.sh"
asdf plugin add nodejs https://github.com/asdf-vm/asdf-nodejs.git
asdf plugin add python https://github.com/danhper/asdf-python.git
asdf plugin add golang https://github.com/asdf-community/asdf-golang.git
asdf plugin add rust https://github.com/asdf-community/asdf-rust.git
asdf plugin add java https://github.com/halcyon/asdf-java.git
asdf install nodejs lts && asdf global nodejs lts
asdf install python 3.12.6 && asdf global python 3.12.6
asdf install golang 1.22.6 && asdf global golang 1.22.6
asdf install rust stable && asdf global rust stable
asdf install java temurin-17.0.12+7 && asdf global java temurin-17.0.12+7
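Per project, `asdf local` pins versions in a .tool-versions file so the runtime follows the directory; the project path here is hypothetical:
cd ~/projects/some-app      # hypothetical project
asdf local nodejs lts
asdf local python 3.12.6
cat .tool-versions          # both pins land in this file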
Python tools I reach for
python -m pip install --upgrade pip
pip install uv pipx pre-commit ruff black
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc
CLI comfort pack
sudo apt install -y ripgrep fd-find bat eza jq yq tree aria2 neovim gh
echo 'alias fd="fdfind"' >> ~/.zshrc
echo 'alias cat="batcat -pp"' >> ~/.zshrc
echo 'alias ls="eza -l --group-directories-first --git --icons"' >> ~/.zshrc
Kubernetes on the side (optional but handy)
Install kubectl and minikube:
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key \
| sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /' \
| sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubectl
curl -LO https://github.com/kubernetes/minikube/releases/latest/download/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube && rm minikube-linux-amd64
minikube start --driver=docker --cpus=4 --memory=4096   # the laptop has 8 GB total, so leave the host some headroom
minikube addons enable ingress
minikube addons enable metrics-server
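Sanity check once the cluster is up:
minikube status
kubectl get nodes
kubectl get pods -A        # the ingress and metrics-server pods should appear here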
Day-to-day k8s tools:
# Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# k9s
curl -Lo k9s.tgz https://github.com/derailed/k9s/releases/latest/download/k9s_Linux_amd64.tar.gz
tar -xzf k9s.tgz k9s && sudo install k9s /usr/local/bin/ && rm -f k9s k9s.tgz
# kubectx / kubens (the plain bash scripts from the repo; the GitHub release assets are tarballs)
sudo curl -sL https://raw.githubusercontent.com/ahmetb/kubectx/master/kubectx -o /usr/local/bin/kubectx && sudo chmod +x /usr/local/bin/kubectx
sudo curl -sL https://raw.githubusercontent.com/ahmetb/kubectx/master/kubens -o /usr/local/bin/kubens && sudo chmod +x /usr/local/bin/kubens
# stern (logs) - release assets are versioned tarballs, so look up the latest tag first
STERN_VER=$(curl -s https://api.github.com/repos/stern/stern/releases/latest | grep -oP '"tag_name": "v\K[^"]+')
curl -Lo stern.tgz "https://github.com/stern/stern/releases/download/v${STERN_VER}/stern_${STERN_VER}_linux_amd64.tar.gz"
tar -xzf stern.tgz stern && sudo install stern /usr/local/bin/ && rm -f stern stern.tgz
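Typical day-to-day use, with placeholder names:
helm list -A               # releases across all namespaces
k9s                        # TUI over the current context
kubectx                    # list/switch contexts
kubens default             # switch namespace
stern my-app               # tail logs from every pod matching "my-app" (placeholder)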
Databases for local dev
mkdir -p ~/stacks/dev-db && cd ~/stacks/dev-db
cat > docker-compose.yml <<'YAML'
services:
  postgres:
    image: postgres:16
    container_name: pg16
    restart: unless-stopped
    environment: { POSTGRES_PASSWORD: devpass }
    volumes: [ "pgdata:/var/lib/postgresql/data" ]
    ports: [ "5432:5432" ]
  redis:
    image: redis:7
    container_name: redis7
    restart: unless-stopped
    ports: [ "6379:6379" ]
volumes: { pgdata: {} }
YAML
docker compose up -d
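A quick smoke test for both containers:
docker compose ps
docker exec pg16 pg_isready -U postgres      # expect "accepting connections"
docker exec redis7 redis-cli ping            # expect PONG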
Scan, inspect, and manage containers
# lazydocker (TUI)
curl -s https://raw.githubusercontent.com/jesseduffield/lazydocker/master/scripts/install_update_linux.sh | bash
# Trivy (image scanner)
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sudo sh -s -- -b /usr/local/bin
# Dive (layer analyzer) - release assets are versioned, so look up the latest tag first
DIVE_VER=$(curl -s https://api.github.com/repos/wagoodman/dive/releases/latest | grep -oP '"tag_name": "v\K[^"]+')
curl -L "https://github.com/wagoodman/dive/releases/download/v${DIVE_VER}/dive_${DIVE_VER}_linux_amd64.deb" -o /tmp/dive.deb && sudo dpkg -i /tmp/dive.deb
# skopeo (inspect/pull without Docker)
sudo apt install -y skopeo
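Example invocations; the image names are only examples:
lazydocker                                          # TUI over containers, images, volumes
trivy image postgres:16                             # scan an image for known CVEs
dive postgres:16                                    # inspect layers and wasted space
skopeo inspect docker://docker.io/library/redis:7   # read remote image metadata without pulling it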
Editor choices
- Neovim with a short config, or
- VS Code Remote Tunnel, so the full editor attaches from another machine (a sketch follows).
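A minimal sketch of the tunnel route, using the standalone CLI download URL from the VS Code docs; the tunnel name is made up:
# grab the standalone VS Code CLI and start a named tunnel (prints a device-login URL)
curl -Lk 'https://code.visualstudio.com/sha/download?build=stable&os=cli-alpine-x64' --output vscode_cli.tar.gz
tar -xzf vscode_cli.tar.gz
./code tunnel --name jarvis-tunnel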
Local HTTPS the easy way
Internal UIs and public demos:
- Tailscale Serve for private HTTPS inside the tailnet (example below).
- ngrok for public demos and webhooks (example below).
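Both are one-liners; port 3000 is a placeholder for whatever app is running, and ngrok wants an auth token from its dashboard first:
# private HTTPS, reachable only from devices on the tailnet (same syntax as the Portainer tip)
tailscale serve https / http://localhost:3000
# public tunnel for demos and webhooks
ngrok config add-authtoken <token-from-the-ngrok-dashboard>
ngrok http 3000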
Backups and health
- restic to /backup/restic with a daily timer.
- journald persistent logs with a size cap.
- zram for better memory use.
- netdata for a quick dashboard.
A small weekly script prints SMART status, temps, and docker disk use to the logs.
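A sketch of how these can be wired up, with assumptions flagged: the restic repo lives on the /backup mount from earlier, the 500M journald cap is arbitrary, and restic, zram-tools, netdata, smartmontools, and lm-sensors all come from the standard repos.
sudo apt install -y restic zram-tools netdata smartmontools lm-sensors
# restic: one-time repo init; the daily timer just reruns the backup line
restic init --repo /backup/restic
restic -r /backup/restic backup /data /home      # prompts for the repo password set at init
# journald: persistent logs with a size cap
sudo mkdir -p /var/log/journal /etc/systemd/journald.conf.d
printf "[Journal]\nStorage=persistent\nSystemMaxUse=500M\n" | sudo tee /etc/systemd/journald.conf.d/size.conf
sudo systemctl restart systemd-journald
# the weekly health snapshot boils down to three commands
sudo smartctl -H /dev/sda
sensors
docker system df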
Right before the bump
Everything is ready. Tailscale is green. Docker and Portainer are up. asdf is set. Minikube works. tmux saves and restores. I sit at my Mac and type:
ssh myusername@jarvis