Is it easy to make a stack of PCs into a server on this operating system (standard version)?
Sounds like what you want is a hypervisor. While Windows Server can do this, it's pretty expensive.
Check out Proxmox. The Community Edition is free. Configured with Ceph, it's a very capable hyper-converged "single pane of glass" solution for hosting virtualized servers. At home I run my firewall (pfSense) on it, a file server (FreeNAS), a couple of instances of Security Onion (to practice SIEM), VMs for BOINC and FAH, an instance of Debian to host my Ubiquiti WiFi controller, and a couple of Windows Server evaluation instances for learning/practice.
I have a 4-node cluster made up of 2 Supermicro Ivy Bridge-era FatTwins in a 12U rack. Works beautifully. (The total cluster is 48 cores, 320GB RAM, 12 × 500GB SSDs, 8 × 1TB HDDs, and 8 × 4TB HDDs.)
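For a rough sense of what a cluster like that yields as Ceph storage, here's the back-of-the-envelope math. This assumes Ceph's common default of 3× replication; actual usable space comes out a bit lower once you account for filesystem and Ceph overhead:

```python
# Raw capacity of the cluster above, in TB, grouped by drive type.
raw_tb = 12 * 0.5 + 8 * 1 + 8 * 4   # 6 + 8 + 32 = 46 TB raw

# With a replicated pool, every object is stored `replicas` times,
# so usable capacity is roughly raw capacity divided by the replica count.
replicas = 3                          # common Ceph default (assumption here)
usable_tb = raw_tb / replicas

print(raw_tb, round(usable_tb, 1))    # 46.0 15.3
```

So even though the box lists tens of terabytes of disks, plan for roughly a third of that as usable, fault-tolerant space.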
Does RAM stack up from all nodes?
CPUs and RAM are available to virtual machines running on their respective nodes. Any single VM can only use CPU and RAM resources from a single node at a time.
With Ceph, you can define storage pools that span drives across many nodes.
The idea is that you have, say, 4-5+ computers arranged so that you can spin up virtual machines wherever there are available resources to do so. And with Ceph, a virtual machine's virtual disks are accessible to all nodes at any time, so VMs can be migrated from node to node to keep services up through maintenance, hardware failure, etc.
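As a toy illustration of the "spin up wherever there are resources" idea: placement just means finding a node with enough free CPU and RAM for the VM. The node names and numbers below are made up, and a real scheduler (Proxmox's included) weighs much more than this:

```python
# Hypothetical per-node free resources: CPU cores and RAM in GB.
nodes = {
    "node1": {"cores": 4,  "ram_gb": 8},
    "node2": {"cores": 12, "ram_gb": 64},
    "node3": {"cores": 2,  "ram_gb": 4},
}

def pick_node(nodes, need_cores, need_ram_gb):
    """Return the first node with enough free CPU and RAM, or None."""
    for name, free in sorted(nodes.items()):
        if free["cores"] >= need_cores and free["ram_gb"] >= need_ram_gb:
            return name
    return None

# An 8-core / 32GB VM only fits on one node in this example.
print(pick_node(nodes, need_cores=8, need_ram_gb=32))  # node2
```

The key point from above still holds: the VM lands on *one* node; clustering decides where it lands, it doesn't pool the nodes' resources together.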
Do you know how many motherboards can fit in which U-size case, or does it matter how I mount them?
Honestly, you'll get way more for your money and time buying refurbished servers from a while back.
I don't know much about servers; I think I will use normal PC cases.
You can do that, but it's a huge mess. Rack-mount computers, switches, etc., will consolidate the mess and be easier to service and upgrade.
So I guess I can squeeze a system into a 1U?
Check out old Supermicro FatTwins. They are arranged with 2 nodes side by side in 2U, so you get 4 nodes in 4U of space, but cooled by 80mm fans rather than 40mm (quieter), with 24 drive bays in 4U rather than only 16, and better PCIe expansion options than 1U servers.
I just want to make multiple servers work as one for high loads, with a connection of at least 1 Gb/s that's not just LAN, and maybe other stuff too, like a firewall.
Achieving >1 Gb/s routing or file serving is not a task that requires a cluster of servers. Even a very basic computer with a 10G network card could accomplish that, if properly configured and connected to other 10G devices.
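To put those link speeds in perspective, the unit conversion is simple (this is nominal line rate; real-world throughput is lower after protocol overhead):

```python
def link_mb_per_s(gigabits: float) -> float:
    """Convert a nominal link speed in Gb/s to MB/s (8 bits per byte)."""
    return gigabits * 1000 / 8

print(link_mb_per_s(1))   # 125.0  -- one SATA SSD can saturate this
print(link_mb_per_s(10))  # 1250.0 -- takes several SSDs or NVMe to fill
```

In other words, a single 1 GbE port tops out around 125 MB/s, which is why a lone well-specced box with a 10G NIC beats a pile of 1 GbE nodes for raw throughput.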
The purpose of clustering is to virtualize the server infrastructure for easier management, easier backup/recovery, hardware failure tolerance, etc...
Unless you have specific software that is designed to distribute a load to multiple computers, and you spin up that software on all of the nodes, then you're not going to "scale" the performance of a virtualized server across multiple nodes. It doesn't work that way.
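To make that concrete: software that *is* designed to distribute work does it at the application layer, e.g. by handing each job to a different node. A toy round-robin sketch (node and request names here are invented):

```python
from itertools import cycle

# Hypothetical nodes, each running its own copy of a distribution-aware service.
nodes = ["node1", "node2", "node3"]
assign = cycle(nodes)

# Each incoming request goes to the next node in turn. Note that no single
# request ever uses more than one node -- the work is split, not the VM.
requests = ["req-%d" % i for i in range(6)]
placement = {req: next(assign) for req in requests}
print(placement)  # req-0->node1, req-1->node2, req-2->node3, req-3->node1, ...
```

That splitting logic has to live in the software itself (BOINC and FAH above are examples); the hypervisor won't do it for you.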
Can you help me with this question: are cluster nodes (like 3 PCs), a cluster controller (a stronger PC), 2 switches, and 1 router enough to make them work as one? (All have a 1 Gb port on the mobo, and I'd add another PCIe x1 1 Gb card for the connection to the bigger PC.) Can it work with 4 cluster PCs and without another PC?