
One week ago, Intel officially launched its first Arc GPUs. The company is offering mobile Arc chips first, with desktop GPUs to follow later this year. While the launch certainly answered a lot of questions, it raised several new ones at the same time. One of the most pertinent concerns its advertised clock speeds. As it turns out, Intel is not following in its competitors' footsteps in how it lists GPU clock speeds. This led to some confusion during the launch, but Intel has since clarified the situation to ExtremeTech.
Right off the bat, the mobile Arc GPUs seem to have slower clock speeds. Intel’s chart (below) only mentions a “Graphics Clock,” with no explanation of whether it is a base clock or a boost clock. Nvidia and AMD do not work this way. AMD lists a “Game Frequency” and says it is the clock speed you can expect while playing a game. For example, the Game Frequency of its RX 6800M is listed as 2,300MHz. Nvidia offers a range of clock speeds it calls the GPU’s “boost clock.” This means you should expect the GPU’s clock speed to move within that range depending on your load. For the mobile RTX 3080 Ti, it ranges from 1,125MHz to 1,590MHz. By comparison, Intel lists the Graphics Clock for its entry-level A350M as just 1,150MHz, which is low even for an entry-level GPU. So what gives, Chipzilla?
As it turns out, Intel’s listed clock speed is somewhat meaningless. According to Intel, it is essentially the lowest clock speed the GPU will achieve. In its testing of a whole bunch of chips, this was the worst-case scenario across a variety of applications. That means the clock speed can be much higher in certain applications like gaming. It may even reach 2GHz and higher in some games, but it could also be much lower if the chip is thermally limited. That sounds a lot like thermal throttling, but remember, these are mobile GPUs. Intel plans to put them in a variety of laptops with an array of sizes and thermal solutions. The way we understand it, Intel is playing it safe. Instead of advertising a specific clock speed the chip might not hit in a particular laptop under certain conditions, it simply lists a lower one. Interestingly, if you visit Intel’s website for the entry-level A350M, it describes the listed clock speed as the “base clock.” There is no mention of a boost clock.
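To make that distinction concrete, here is a minimal Python sketch of how the same set of observed clocks can produce very different headline numbers depending on what a vendor chooses to report. The clock samples are hypothetical, and mapping the minimum, median, and maximum to Intel’s Graphics Clock, AMD’s Game Frequency, and Nvidia’s boost range is only a rough approximation of how each vendor describes its metric, not any official methodology.

```python
# Illustrative only: the same observations, three different "headline" numbers.
# The sample values below are hypothetical, not real measurements.
from statistics import median

# Hypothetical graphics clocks (MHz) observed across workloads and laptop designs
observed_clocks_mhz = [1150, 1400, 1650, 1900, 2050, 2200]

worst_case_floor = min(observed_clocks_mhz)       # roughly how Intel describes its Graphics Clock
typical_game_clock = median(observed_clocks_mhz)  # closer in spirit to AMD's Game Frequency
boost_ceiling = max(observed_clocks_mhz)          # closer in spirit to the top of Nvidia's boost range

print(f"Worst-case floor:   {worst_case_floor} MHz")
print(f"Typical game clock: {typical_game_clock} MHz")
print(f"Boost ceiling:      {boost_ceiling} MHz")
```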

A visualization of the possible clock range. (Photo: Intel)
This is a surprising way to launch a family of GPUs. Companies usually provide numbers at launch that paint their products in the best possible light. Apple recently came under fire for a misleading chart claiming its M1 Ultra chip is faster than the RTX 3090. Heck, graphics card companies are practically known for launching GPUs with obscure performance numbers that have little real-world value. Despite all that, Intel has taken the opposite approach.
On the one hand, we appreciate the honesty. On the other, it would also be helpful to know what the maximum clock might be in gaming, since this is a GPU after all. We all know that “results can vary,” so what’s the harm in just providing that information? This is why Nvidia offers a range of clock speeds and why AMD adds a disclaimer to its number. It states, “‘Game Frequency’ is the expected GPU clock when running typical gaming applications, set to typical TGP (Total Graphics Power). Actual individual game clock results may vary.”
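Since results do vary, curious readers can check what their own GPU is actually running at under load. The sketch below is illustrative and platform-dependent: it assumes either an Nvidia GPU with the `nvidia-smi` utility installed or a Linux machine with AMD’s amdgpu driver exposing the usual sysfs entry. The path and card index can differ between systems, and other drivers expose their frequencies through their own interfaces.

```python
# Minimal sketch, not a definitive tool: observing the *actual* graphics clock on a
# running system, since the advertised figure is only a starting point.
# Assumptions: either an Nvidia GPU with `nvidia-smi` installed, or a Linux machine
# with the amdgpu driver exposing /sys/class/drm/card0/device/pp_dpm_sclk.
import subprocess
from pathlib import Path

def nvidia_current_clock_mhz() -> str:
    """Current graphics clock as reported by nvidia-smi (Nvidia GPUs only)."""
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=clocks.current.graphics", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def amdgpu_clock_states(card: int = 0) -> str:
    """Available/active shader clock states from sysfs (Linux + amdgpu driver only)."""
    return Path(f"/sys/class/drm/card{card}/device/pp_dpm_sclk").read_text()

if __name__ == "__main__":
    for label, reader in [("Nvidia", nvidia_current_clock_mhz), ("AMD (amdgpu)", amdgpu_clock_states)]:
        try:
            print(f"{label}:\n{reader()}")
        except (FileNotFoundError, OSError, subprocess.CalledProcessError):
            print(f"{label}: not available on this system")
```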
In the end, what matters most is performance, not the claimed frequency, but gamers need to keep in mind that AMD, Nvidia, and now Intel measure some of these metrics differently and consequently make different claims.