Hyper-Threading is a well-established CPU technology that presents a single physical core as two logical processors which share the core's execution resources, improving parallelism and efficiency. I am going to write a three-part article series to explore the ins and outs of this technology, though these articles will only scratch the surface of its complexity.
- What Is CPU Hyper-Threading?
- How Hyper-Threading Skews CPU Usage Readings
- Does Hyper-Threading Boost CPU Performance—and by How Much?
What Is CPU Hyper-Threading?
To fully grasp Hyper-Threading, it’s important to consider it within the context of CPU architecture. A generic/simplified CPU architecture is pictured below:

Components of a CPU:
- CPU Socket The CPU socket is the physical interface on the motherboard that houses the CPU. It provides the electrical and mechanical connections necessary for the CPU to communicate with the rest of the computer system. Sockets are designed to match specific CPU models and types, ensuring compatibility.
- Core A core is an independent processing unit within the CPU capable of executing instructions. Modern CPUs typically contain multiple cores, allowing them to perform multiple tasks simultaneously. Each core has its own set of resources, such as registers, execution units, and L1/L2 cache, but shares certain resources, such as the L3 cache and memory interfaces, with other cores.
- Logical CPU Hyper-threading technology enables a single physical core to appear as two logical CPUs. Each logical processor has its own architectural state but shares the core’s execution resources. This design enables one logical processor to utilize resources when the other is stalled, thereby improving parallelism and efficiency.
The CPU section at the top of an Oracle AWR report provides detailed information about the CPU hardware on which the Oracle database is running. Here’s an example.

In this example, the motherboard contains four sockets. Each socket accommodates ten cores, amounting to a total of 40 cores. With hyper-threading, each core appears as two logical CPUs, resulting in 80 CPUs.
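As a quick sanity check, the multiplication behind those numbers can be sketched in a few lines of Python (the figures are taken from the AWR example above):

```python
# Logical CPU count = sockets x cores per socket x threads per core.
# Figures from the AWR example above; Hyper-Threading gives
# two hardware threads per core.
sockets = 4
cores_per_socket = 10
threads_per_core = 2  # 2 with Hyper-Threading enabled, 1 without

physical_cores = sockets * cores_per_socket
logical_cpus = physical_cores * threads_per_core

print(physical_cores)  # 40
print(logical_cpus)    # 80
```

Note that the operating system schedules work on the 80 logical CPUs; it cannot hand a task directly to a physical core.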
You can use the lscpu command or view the contents of /proc/cpuinfo to gather detailed information about the CPU architecture on a Linux system. Here is the output from the lscpu command run on the server:
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 4
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Model name: Intel(R) Xeon(R) CPU E5-4650 v2 @ 2.40GHz
Stepping: 4
CPU MHz: 2767.766
CPU max MHz: 2900.0000
CPU min MHz: 1200.0000
BogoMIPS: 4800.03
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 25600K
NUMA node0 CPU(s): 0-79
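To show how these fields relate, here is a small Python sketch that parses lscpu-style key/value output and cross-checks that sockets × cores per socket × threads per core equals the total CPU count. The sample text is abbreviated from the output above, and parse_lscpu is a hypothetical helper written for illustration, not a standard API:

```python
# Sample lscpu output, trimmed to the topology fields used below.
LSCPU_OUTPUT = """\
CPU(s): 80
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 4
"""

def parse_lscpu(text):
    """Parse 'Key: value' lines into a dict (hypothetical helper)."""
    fields = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields

info = parse_lscpu(LSCPU_OUTPUT)
expected = (int(info["Socket(s)"])
            * int(info["Core(s) per socket"])
            * int(info["Thread(s) per core"]))

# The computed topology should match the reported total.
assert expected == int(info["CPU(s)"])
print(expected)  # 80
```

On a live system you would feed this the real output of `lscpu` instead of the sample string.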

IBM takes the same idea further with its Simultaneous Multi-Threading (SMT) technology; Hyper-Threading is essentially Intel’s two-way implementation of SMT. On this high-performance IBM Power server, SMT-8 presents each physical core as eight logical processors. With 120 cores, that yields an impressive 960 logical CPUs in total, though the exact number of sockets is not shown here.
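The same arithmetic applies to SMT-8, just with a higher multiplier:

```python
# SMT-8: each physical core appears as eight logical processors.
cores = 120
threads_per_core = 8  # SMT-8 on this IBM Power server
logical = cores * threads_per_core
print(logical)  # 960
```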
That’s the basics of Hyper-Threading! Next, I’ll dive into how it skews CPU usage readings (Part 2) and whether it truly boosts performance (Part 3). Follow along for more!