That is a common problem: SIMD can be slower than scalar code. Consider this example:
The goal was to emulate a 4-way PowerPC TLB on x86-64. Four uint32_t values had to be compared to find a match. The data structure was roughly "uint32_t array[512][4][4]", laid out so that the 4 uint32_t values would be adjacent.
It simply didn't perform well. Getting the equality test results out of SIMD was lengthy, awkward, and slow.
That task was so perfect for SIMD, and yet SIMD failed at it. The data was exactly the size of an SSE XMM register. It was aligned. The task was a simple parallel operation.
Based on your description, here’s what you should do to vectorize that code.
1. If you don’t have AVX, a good way to broadcast an integer from a scalar register to a vector is _mm_cvtsi32_si128 followed by _mm_shuffle_epi32( v, 0 )
2. To compare them for equality, use _mm_cmpeq_epi32
3. Getting the index of the first match takes 2 instructions: MOVMSKPS and BSF.
Getting the compiler to emit them is a bit awkward, though. You first need _mm_castsi128_ps to be able to call _mm_movemask_ps. Test the integer for 0 afterwards; if it is zero, none of the 4 lanes were equal.
The portable way to emit BSF, std::countr_zero, was only introduced in C++20. In earlier versions of the language you have to use the preprocessor to detect the compiler: _BitScanForward for MSVC, __builtin_ctz for GCC/Clang.
If you want the count of matches instead, replace BSF with POPCNT. Again, before C++20’s std::popcount it’s compiler specific: __popcnt for MSVC, __builtin_popcount for GCC/Clang.
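Putting steps 1–3 together, a minimal sketch; the findLane name and the -1 no-match convention are mine, not from the original code:

```cpp
#include <cstdint>
#include <emmintrin.h> // SSE2 intrinsics
#ifdef _MSC_VER
#include <intrin.h>    // _BitScanForward
#endif

// Find which of 4 adjacent uint32_t tags equals `key`.
// Returns the lane index 0-3, or -1 when no lane matched.
// Assumes `tags` is 16-byte aligned, as in the original layout.
static inline int findLane(const uint32_t* tags, uint32_t key) {
    // 1. Broadcast the key: scalar -> lane 0, then shuffle lane 0 everywhere.
    __m128i k = _mm_shuffle_epi32(_mm_cvtsi32_si128((int)key), 0);
    // 2. One aligned vector load + 4-lane equality compare.
    __m128i eq = _mm_cmpeq_epi32(_mm_load_si128((const __m128i*)tags), k);
    // 3. MOVMSKPS: one bit per 32-bit lane, moved into a GP register.
    int mask = _mm_movemask_ps(_mm_castsi128_ps(eq));
    if (mask == 0)
        return -1; // none of the 4 lanes were equal
#ifdef _MSC_VER
    unsigned long idx;
    _BitScanForward(&idx, (unsigned long)mask); // BSF
    return (int)idx;
#else
    return __builtin_ctz((unsigned)mask); // BSF / TZCNT
#endif
}
```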
P.S. If you only need a single boolean saying whether none / any of the 4 lanes matched, use _mm_test_all_zeros / _mm_test_mix_ones_zeros instead of _mm_movemask_ps. Or if you want to test more than one cache entry, leave the comparison result in a vector register, compare more entries, and combine the results with bitwise instructions.
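For the more-than-one-entry case, the combining trick looks roughly like this; anyMatch8 is a hypothetical name, and I'm assuming the two 4-wide entries are contiguous in memory:

```cpp
#include <cstdint>
#include <emmintrin.h> // SSE2 intrinsics

// Test two adjacent 4-wide entries (8 tags) against `key` with a single
// scalar test at the end: combine both comparison results in vector
// registers, then move to a GP register just once.
static inline bool anyMatch8(const uint32_t* tags, uint32_t key) {
    __m128i k   = _mm_shuffle_epi32(_mm_cvtsi32_si128((int)key), 0);
    __m128i eq0 = _mm_cmpeq_epi32(_mm_loadu_si128((const __m128i*)tags), k);
    __m128i eq1 = _mm_cmpeq_epi32(_mm_loadu_si128((const __m128i*)(tags + 4)), k);
    // Bitwise OR keeps any lane that matched in either entry.
    __m128i any = _mm_or_si128(eq0, eq1);
    return _mm_movemask_ps(_mm_castsi128_ps(any)) != 0;
}
```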
Update: If you don’t need the index or count of matches but want to individually test all 4 matches with scalar code, on old CPUs _mm_movemask_epi8 is slightly faster because of cross-domain latency; test the result for bits 1, 0x10, 0x100, 0x1000.
I wouldn't characterize this as "perfect" for SIMD!
Perfect for SIMD usually means a significant amount of calculation that can be done vector-wise (you could include contiguous data movement in that definition).
Here, you are doing exactly one (cheap) calculation, the compare, plus one vectorized load, and you presumably want to feed the result to a branch.
You are only saving a few instructions versus scalar code, and you pay a vector-to-GP penalty.
The penalty is quite small, 1-3 cycles each direction. RAM latency is 1-2 orders of magnitude more than that, even L1D level of cache is many cycles away. Replacing multiple scalar RAM loads with 1 vector load is usually a good idea performance wise. This is true even if you’ll then use extract instructions to access the lanes. Extract latency is 2-3 cycles, much faster than RAM.
I think what might have happened is that GP tried to use SSE for dealing with individual lanes. A better approach for that use case is moving the comparison results to a scalar register with a single movmskps, pmovmskb, or ptest instruction, just once for the complete vector.
Yes, the penalty is small, but the total amount of vectorized work is also very small!
L1D is not many cycles away: it is 4 or 5 for scalar loads, 6 or 7 for xmm or ymm loads. If the load misses, it doesn't much matter if it's a scalar or vector load: the time to fetch the cache line is the same.
So a scalar load of 5 cycles looks much better, latency-wise, than a vector load of 6 cycles, plus an extract of 1-3 cycles.
Of course, you need only 1 vector load vs 4 GP loads, but the latencies are overlapped.
Furthermore, the extracts can happen on only a single port: so even though you have 512 bits/cycle of "contiguous" vector load bandwidth, you then suck those loads through a 32 bits/cycle extract straw [1]? 32-bit GP loads have 64 bits/cycle of bandwidth and the value goes directly to the GP register, or is even micro-fused with the ALU op.
So no, it is not an obvious win to load 4x32-bit values with a vector load and then bring them over to GP registers. Even if it might sometimes be slightly better, this is hardly "perfect" for vectorization; rather, I'd say it is quite a poor candidate for vectorization.
Also, if the goal is to set a flag and jump on it, you'll still end up needing a scalar comparison anyway, so actually for the computation part there is no savings.
Don't forget the thing you are comparing to: presumably it starts in a GP register, so you need some kind of GP->SIMD move and then a broadcast to prepare the comparison.
> I think what might have happened, GP tried to use SSE for dealing with individual lanes. Better approach for that use case is moving the comparison results to scalar register with a single movmskps, pmovmskb, or ptest instruction, just once for the complete vector.
Right, well, who knows what they tried to do or how the surrounding code works. I agree the approach you suggest sounds like it should be a slight win sometimes, but the key word is "slight". If the surrounding code is general-purpose code and the inputs and outputs come from and go to GP registers, this is just too small to vectorize well. It's a common misconception that, say, comparing values is the bulk of the work, so of course vectorization will be a 4x win; in reality, all the surrounding stuff takes most of the work, much more than a comparison, which can execute 4 per cycle on the scalar side.
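For reference, the scalar version being argued for here is about as simple as it gets (the function name is mine); the four compares are independent, so a modern core can overlap them:

```cpp
#include <cstdint>

// Scalar baseline for the same 4-way tag lookup: 4 independent
// compare-and-branch steps, no GP<->vector domain crossing at all.
// Returns the matching index 0-3, or -1 when no tag matched.
static inline int findLaneScalar(const uint32_t* tags, uint32_t key) {
    for (int i = 0; i < 4; ++i)
        if (tags[i] == key)
            return i;
    return -1;
}
```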
---
[1] You can try other tricks like extracting 64-bits and then messing around in the GP reg to split the halves, but it's basically a wash.