# TuringPi
## Components
- (1) Turing Pi 2 clusterboard
- (1) I/O shield
- (2) Turing Pi 2 CM4 Adapter v1.0
- (2) Raspberry Pi CM4 modules
- (2) Raspberry Pi CM4 heatsinks
- (1) Turing Pi Pico PSU and power supply
- (2) [Turing RK1](https://docs.turingpi.com/docs/turing-rk1-specs-and-io-ports) boards
- (2) RK1 heatsinks
- (1) Teenage Engineering ITX case, orange
### Extra parts (unused in the build)
- (3) Turing Pi 2 CM4 Adapter v1.0
- (2) Raspberry Pi CM4 heatsinks
## Tasks
- [ ] #project #turingpicluster host Caddy and FoundryVTT from the cluster
- [ ] #project #turingpicluster replace CM4s with two more RK1s (with heatsinks)
- [ ] #project #turingpicluster order more storage
## outline
Follow the [hardware installation guide](https://docs.turingpi.com/docs/turing-pi2-hardware-installation) from Turing Pi.
Flash your nodes with an operating system.
Configure a few system settings on each node before digging in:
1. reserve an IP address for each node in your router settings
2. enable SSH on each node
3. copy your public key to each node's `~/.ssh/authorized_keys`
4. edit `/etc/ssh/sshd_config` to disable password login
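A minimal sketch of steps 3 and 4, assuming the nodes are reachable by the hostnames used later in this log (the `pi` user and the service name are assumptions; adjust for your OS):

```shell
# step 3: copy your public key to each node (hostnames and user are assumptions)
for host in turing-a turing-b turing-c turing-d; do
  ssh-copy-id "pi@${host}"
done

# step 4: on each node, disable password login, then restart sshd
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh   # the service may be named sshd on some distros
```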
Now do the rest of the thing, probably using Ansible. To be continued.
## devlog
(top to bottom is oldest to newest)
### 2025-03-13 flashing the BMC
The BMC (Baseboard Management Controller) doesn't seem to be set up with its hostname (at least my router doesn't think so). I tracked down its IP in my router settings and opened its web page. It looked very different from what the docs show, so I assume it was running v1 firmware.
I upgraded the firmware with an SD card, after which the web UI matched the docs. Then I flashed the two RK1 nodes.
### 2025-03-17 flashing CM4s
My Raspberry Pi CM4s came in! Now I'm flashing them.
The [docs](https://docs.turingpi.com/docs/raspberry-pi-cm4-flashing-os) aren't foolproof, though. I got to the part where I install `rpiboot`, and running `make` in the repo directory gives this error:
```
cc -Wall -Wextra -g -o rpiboot main.c bootfiles.c decode_duid.c `pkg-config --cflags --libs libusb-1.0` -DGIT_VER="\"5a62a2cc\"" -DPKG_VER="\"local\"" -DBUILD_DATE="\"2025/03/17\"" -DINSTALL_PREFIX=\"/usr\"
Package libusb-1.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `libusb-1.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libusb-1.0' found
main.c:1:10: fatal error: libusb.h: No such file or directory
1 | #include <libusb.h>
| ^~~~~~~~~~
compilation terminated.
make: *** [Makefile:9: rpiboot] Error 1
```
This being my first time trying to build something that relies on libusb, I changed tack.
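In hindsight, the root cause is `pkg-config` failing to find libusb's development files. On a Debian-based system, installing the dev package would likely have fixed the build:

```shell
# Debian/Ubuntu package names; other distros differ
sudo apt install -y libusb-1.0-0-dev pkg-config
make   # re-run the build in the rpiboot repo
```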
The Turing docs show [another way](https://docs.turingpi.com/docs/raspberry-pi-cm4-flashing-os#flashing-from-a-command-line): load the rpi image onto an SD card, mount it onto the BMC filesystem, and flash the drive there.
First I had tried a third (undocumented) way: the "Flash node" dialog in the BMC web UI. It did not succeed (though at least it failed within a second of trying).
Turing's way seemed to work.
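I didn't record the exact commands, but conceptually the BMC-side flash is just writing the image onto the node's exposed block device. Something like the following, where both device paths are hypothetical (check `lsblk` on the BMC for the real ones):

```shell
# mount the SD card holding the image, then write it to the node's eMMC
mkdir -p /mnt/sdcard
mount /dev/mmcblk0p1 /mnt/sdcard                                  # hypothetical SD partition
dd if=/mnt/sdcard/raspios.img of=/dev/sda bs=4M status=progress   # hypothetical target device
sync
```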
Hosts `turing-b` and `turing-c` are now set up with hostnames, wifi, authorized_keys (from pangolin and bmc), and no password auth!
### node layout
It would be good to know what I have.
\[edit next day: updating this table to reflect the changes I just made based on notes below]
| node | hostname | type | cpu | gpu | memory | disk | peripherals |
| ---- | -------- | ---- | --- | --- | ------ | ---- | ----------- |
| 1 | turing-B | [CM4](https://docs.turingpi.com/docs/raspberry-pi-cm4-intro-specs) ([datasheet](https://datasheets.raspberrypi.com/cm4/cm4-product-brief.pdf)) | Broadcom BCM2711: quad-core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5 GHz | | 8 GB | 32 GB eMMC | wifi |
| 2 | turing-C | CM4 | Broadcom BCM2711: quad-core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5 GHz | | 8 GB | 32 GB eMMC | wifi |
| 3 | turing-A | [RK1](https://docs.turingpi.com/docs/turing-rk1-specs-and-io-ports) | 8-core: 4× Cortex-A76 + 4× Cortex-A55, DynamIQ | Mali-G610: supports OpenGL ES 1.1/2.0/3.2, OpenCL up to 2.2, Vulkan 1.2; proprietary 2D hardware acceleration engine | 16 GB | 32 GB eMMC | 1 TB M.2 NVMe (Kingston SKC3000S1024G, PCIe Gen 4 x4) |
| 4 | turing-D | RK1 | 8-core: 4× Cortex-A76 + 4× Cortex-A55, DynamIQ | Mali-G610: supports OpenGL ES 1.1/2.0/3.2, OpenCL up to 2.2, Vulkan 1.2; proprietary 2D hardware acceleration engine | 16 GB | 32 GB eMMC | |
The [interconnections](https://docs.turingpi.com/docs/turing-pi2-specs-and-io-ports#interconnections) of the TuringPi board show that RPi modules will not be able to use the M.2 slots (for 2260 or 2280 NVMe drives) on the back. Each node has one of those slots.
> [!cite]- M.2
>
> **M.2**, pronounced _m dot two_\[[1](https://en.wikipedia.org/wiki/M.2#cite_note-1)] and formerly known as the **Next Generation Form Factor** (**NGFF**), is a specification for internally mounted computer [expansion cards](https://en.wikipedia.org/wiki/Expansion_card "Expansion card") and associated connectors. M.2 replaces the Mini SATA ([mSATA](https://en.wikipedia.org/wiki/MSATA "MSATA")) standard and the Mini PCIe ([mPCIe](https://en.wikipedia.org/wiki/MPCIe "MPCIe")) standard.
>
> The M.2 specification supports NVM Express (NVMe) as the logical device interface for M.2 PCI Express SSDs.
>
> The M.2 standard allows module widths of 12, 16, 22 and 30 mm, and lengths of 16, 26, 30, 38, 42, 60, 80 and 110 mm. Initial line-up of the commercially available M.2 expansion cards is 22 mm wide, with varying lengths of 30, 42, 60, 80 and 110 mm.\[[3](https://en.wikipedia.org/wiki/M.2#cite_note-sata-io-m.2-3)]\[[5](https://en.wikipedia.org/wiki/M.2#cite_note-m.2-intro-orvem-5)]\[[14](https://en.wikipedia.org/wiki/M.2#cite_note-te-ngff-14)]\[[18](https://en.wikipedia.org/wiki/M.2#cite_note-18)] The codes for the M.2 module sizes contain both the width and length of a particular module; for example, "2242" as a module code means that the module is 22 mm wide and 42 mm long, while "2280" denotes a module 22 mm wide and 80 mm long.
>
> ([wikipedia](https://en.wikipedia.org/wiki/M.2))
> [!cite]- NVMe
>
> NVM Express (NVMe) or Non-Volatile Memory Host Controller Interface Specification (NVMHCIS) is an open, logical-device interface [specification](https://en.wikipedia.org/wiki/Functional_specification "Functional specification") for accessing a computer's [non-volatile storage](https://en.wikipedia.org/wiki/Non-volatile_storage "Non-volatile storage") media usually attached via the [PCI Express](https://en.wikipedia.org/wiki/PCI_Express "PCI Express") bus.
>
> ([wikipedia](https://en.wikipedia.org/wiki/NVM_Express))
Additionally, Node 1 has Mini PCIe with SIM card and USB 2.0. Node 2 has Mini PCIe. Node 3 has 2x SATA3. Node 4 has 4x USB 3.0.
Not sure what the best layout will be for my storage needs. Also not sure of my storage needs.
Node 4 is probably best suited for an RK1, which can benefit from its NVMe slot. (RK1 can support PCIe 3.0 x4)
> [!question] Which is best for storage, Mini PCIe, SATA3, or USB 3.0?
> SATA3 (6 Gb/s) is slightly better than USB 3.0 (5 Gb/s).
> SATA3 is worse than USB 3.1 Gen 2 (10 Gb/s).
>
> If I want the two CM4s to be identical I can put them in nodes 1 and 2 and give them each an identical mini PCIe drive.
>
> Then the two RK1s can go in 3 and 4 and have identical M.2 drives. That sounds good.
> [!cite]- mPCIe
>
> PCI Express (Peripheral Component Interconnect Express)... is a high-speed [serial](https://en.wikipedia.org/wiki/Serial_communication "Serial communication") [computer](https://en.wikipedia.org/wiki/Computer "Computer") [expansion bus](https://en.wikipedia.org/wiki/Expansion_bus "Expansion bus") standard...
>
> The PCI Express electrical interface is measured by the number of simultaneous lanes. (A lane is a single send/receive line of data, analogous to a "one-lane road" having one lane of traffic in both directions.)
>
> ([wikipedia](https://en.wikipedia.org/wiki/PCI_Express))
RK1 can support PCIe 3.0 x4 (3.938 GB/s).
CM4 can support PCIe Gen 2 x1 (0.500 GB/s)
CM5 can support PCIe Gen 2 x1 (0.500 GB/s), but Jeff Geerling at least says you can get a speed boost by using a gen 3 card. Shrug.
([speed comparison table](https://en.wikipedia.org/wiki/PCI_Express#Comparison_table))
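Those per-generation figures fall out of the line rate and encoding overhead. A quick sanity check (the transfer rates and encoding ratios are standard PCIe parameters, not from the Turing docs):

```python
def lane_gbps(transfer_rate_gts: float, enc_payload: int, enc_total: int) -> float:
    """Usable GB/s per PCIe lane: line rate x encoding efficiency / 8 bits per byte."""
    return transfer_rate_gts * (enc_payload / enc_total) / 8

# Gen 2 runs at 5 GT/s with 8b/10b encoding; Gen 3 at 8 GT/s with 128b/130b
cm4 = 1 * lane_gbps(5, 8, 10)     # PCIe 2.0 x1
rk1 = 4 * lane_gbps(8, 128, 130)  # PCIe 3.0 x4

print(round(cm4, 3))  # 0.5
print(round(rk1, 3))  # 3.938
```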
The Kubernetes guide also shows that we can use mini PCIe to SATA adapters to really expand how many SATA drives we can connect (e.g. we could set up three CM4s with identical drives).
### sidebar: ansible? nomad? k3s? rke2?
I read a bit about Nomad vs Kubernetes. They seem to be direct competitors, albeit targeting slightly different markets. While I think I need to learn k8s for my career, in the long run it sounds like running the homelab on Nomad makes more sense.
Within Kubernetes, k3s and RKE2 both have my attention: the former because it's designed for the edge (not to mention the Turing Pi guide uses it), the latter for its robustness.
Regardless, I should pick up a configuration manager like Ansible (or Chef, or Puppet) next.
### Configuration Management
I'm starting with Ansible for a few reasons:
1. [roadmap.sh](https://roadmap.sh/devops) recommends it
2. Jeff Geerling has a book about it
3. [this one blog post](https://blog.aleksic.dev/using-ansible-and-nomad-for-a-homelab-part-1) talks about switching from Chef to Ansible
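As a first smoke test, a minimal inventory covering the four nodes (the file name and group name are my own choices; hostnames are from the node layout table):

```ini
; hosts.ini — all four nodes in one group
[cluster]
turing-a
turing-b
turing-c
turing-d
```

Then `ansible cluster -i hosts.ini -m ping` should reach every node over SSH.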
### sidebar: monitoring
- `lsblk` lists block devices
- `sudo fdisk -l` shows disk identifiers
- `lsusb` and `lsusb -t` list USB devices

From [this Jeff Geerling post](https://www.jeffgeerling.com/blog/2025/top-10-ways-monitor-linux-console):
- `s-tui` can monitor or stress CPUs (to test things like whether the fans come on)
- `htop` is nicer than `top`, but leaner than some other top-likes
- `atop` shows weird things you may not check with other tools
- `iftop` is for network traffic on a particular interface
- `iotop` is that, but for disk bandwidth
- `sysdig` / `csysdig` are more advanced disk access tools
- `nvtop` works with AMD, Nvidia, Intel, and Apple GPUs
- `btop` is "the lamborghini of tops"
- `perf` (`sudo apt install linux-perf`) does system profiling
- `wavemon` monitors wifi signal