Well-Known Member
May 15, 2012
Cape Town, South Africa
cPanel Access Level
Root Administrator
We have been troubleshooting site slowness on a particular server.

What is weird, however, is that we have always used MPM Event or MPM Worker. But if we switch to MPM Prefork, page loads are 10+ seconds faster than with the other two. The busy site we tested with usually scores around 35s on GTmetrix, but now it's 19s. That's a huge performance gain, I would assume.

We even installed LiteSpeed on the server, which made no difference. Only switching to MPM Prefork seems to help. I really don't get it.

Is this normal for shared hosting? Or could it just be the tuning of MPM Event or Worker that is at fault?


Product Owner II
Staff member
Nov 14, 2017
My assumption is that the increased speed is a result of the fact that prefork consumes more memory, resulting in higher load in trade for the faster performance:

I don't particularly care for this site's grammatical issues, but it does explain the benefits/downfalls of the MPMs pretty well:
Prefork MPM:-
Prefork MPM launches multiple child processes. Each child process handles one connection at a time.

Prefork uses more memory in comparison to worker MPM. Prefork is the default MPM used by the Apache 2 server. Prefork MPM always keeps a defined minimum number of processes (MinSpareServers) as spares, so new requests do not need to wait for a new process to start.
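For context, the directives the quote mentions live in the prefork module's config section (on a cPanel server this would typically sit under the EasyApache includes). A minimal sketch, with illustrative stock values rather than recommendations for any particular server:

```apache
<IfModule mpm_prefork_module>
    # Each child is a full, separate process handling one connection at a time,
    # which is why prefork's memory footprint grows with concurrency.
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    MaxRequestWorkers      150
    MaxConnectionsPerChild   0
</IfModule>
```

Because every connection gets its own process, MaxRequestWorkers here is a hard cap on simultaneous connections; setting it higher trades RAM for concurrency.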

Worker MPM:-
Worker MPM spawns multiple child processes, similar to prefork. Each child process runs many threads, and each thread handles one connection at a time.

In short, worker MPM implements a hybrid multi-process, multi-threaded server. Worker MPM uses less memory in comparison to prefork MPM.
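The equivalent sketch for worker shows where the memory saving comes from: concurrency is threads-per-process times processes, so far fewer processes are needed for the same connection count. Again, values are illustrative defaults, not tuning advice:

```apache
<IfModule mpm_worker_module>
    # Each child process runs ThreadsPerChild threads; a thread handles
    # one connection, so 150 workers here needs only ~6 processes.
    StartServers             3
    MinSpareThreads         25
    MaxSpareThreads         75
    ThreadsPerChild         25
    MaxRequestWorkers      150
    MaxConnectionsPerChild   0
</IfModule>
```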

Event MPM:-
Event MPM was introduced in Apache 2.4. It is very similar to worker MPM, but it is designed for managing high loads.

This MPM allows more requests to be served simultaneously by passing off some processing work to supporting threads. Using this MPM, Apache tries to fix the 'keep-alive problem' faced by the other MPMs: when a client completes its first request, it can keep the connection open and send further requests over the same socket. Under prefork and worker, that idle keep-alive connection ties up a whole process or thread; event hands it to a listener thread instead, which reduces the overhead.
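Event accepts the same core directives as worker, plus a knob for how aggressively idle (e.g. keep-alive) connections are multiplexed onto the listener. A hedged sketch, with stock-style values chosen only for illustration:

```apache
<IfModule mpm_event_module>
    # Same process/thread layout as worker...
    StartServers             3
    MinSpareThreads         25
    MaxSpareThreads         75
    ThreadsPerChild         25
    MaxRequestWorkers      150
    MaxConnectionsPerChild   0
    # ...but idle keep-alive connections are parked on a listener thread
    # instead of occupying a worker thread. This factor controls how many
    # extra connections a process may accept relative to its idle threads.
    AsyncRequestWorkerFactor 2
</IfModule>
```

Given the original poster's symptoms, it may be worth comparing these values against the server's actual event/worker settings before concluding prefork is inherently faster; an undersized MaxRequestWorkers or thread pool can queue requests in exactly the way described.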