
SSSE3: fast popcount

Author: Wojciech Muła
Added on: 2008-05-24
Last update: 2017-01-28 (link to the XOP variant), 2016-11-27 (link to the paper)

Introduction

Population count is a procedure of counting the number of ones in a bit string. Intel introduced the dedicated instruction popcnt with the SSE4.2 instruction set; the instruction operates on 32- or 64-bit words.
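For illustration, a minimal sketch of how the dedicated instruction can be reached from C via the SSE4.2 intrinsic, assuming an x86-64 target and GCC or Clang with -msse4.2 (or -mpopcnt); the function name is mine:

    #include <stdint.h>
    #include <nmmintrin.h>   /* SSE4.2: _mm_popcnt_u64 */

    /* Count ones in a 64-bit word with the dedicated popcnt instruction. */
    uint64_t popcount_u64(uint64_t x) {
        return (uint64_t)_mm_popcnt_u64(x);
    }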

However, SSSE3 has the powerful instruction pshufb. This instruction can be used to perform a parallel 16-way lookup: the LUT has 16 entries and is stored in an XMM register, and the indexes are the 4 lower bits of each byte of another XMM register.

Vector algorithm

With the help of pshufb we can get a vector that contains the population count of each of 16 nibbles. To get a vector of population counts for 16 bytes, the instruction has to be called twice, on the vectors of lower and higher nibbles, and the two results added together.

The following code shows the idea:

    ; xmm0 - input (16 bytes)
    ; xmm7 - POPCOUNT_4bit -- lookup table
    ; xmm6 - MASK_bits03 = packed_byte(0x0f) -- mask 4 lower bits

    movdqa    %%xmm0, %%xmm1
    psrlw     $4, %%xmm1
    pand      %%xmm6, %%xmm0     ; xmm0 - lower nibbles
    pand      %%xmm6, %%xmm1     ; xmm1 - higher nibbles
    movdqa    %%xmm7, %%xmm2     ; since instruction pshufb modifies LUT
    movdqa    %%xmm7, %%xmm3     ; it must be saved for further use
    pshufb    %%xmm0, %%xmm2     ; xmm2 = vector of popcount for lower nibbles
    pshufb    %%xmm1, %%xmm3     ; xmm3 = vector of popcount for higher nibbles
    paddb     %%xmm3, %%xmm2     ; xmm2 += xmm3 -- vector of popcount for bytes
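The same algorithm can be written with SSSE3 intrinsics. The following is my own C sketch, not the original code; the name popcount_epi8 is hypothetical. With intrinsics, no explicit copy of the LUT register is needed, since the compiler handles register allocation:

    #include <stdint.h>
    #include <tmmintrin.h>   /* SSSE3: _mm_shuffle_epi8 */

    /* Vector of per-byte population counts, via the nibble LUT. */
    static inline __m128i popcount_epi8(const __m128i v) {
        const __m128i lut  = _mm_setr_epi8(0, 1, 1, 2, 1, 2, 2, 3,
                                           1, 2, 2, 3, 2, 3, 3, 4);
        const __m128i mask = _mm_set1_epi8(0x0f);

        const __m128i lo = _mm_and_si128(v, mask);                    /* lower nibbles  */
        const __m128i hi = _mm_and_si128(_mm_srli_epi16(v, 4), mask); /* higher nibbles */

        return _mm_add_epi8(_mm_shuffle_epi8(lut, lo),   /* popcount of lower nibbles  */
                            _mm_shuffle_epi8(lut, hi));  /* popcount of higher nibbles */
    }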

The last step is adding all the bytes of this vector.

The instruction psadbw calculates the sum of absolute differences of unsigned bytes; if the first argument is all zeros, then the result is the sum of the bytes from the second argument. Unfortunately, psadbw invoked with 128-bit arguments calculates separate sums for bytes 0..7 and 8..15, and stores them in the lower and the higher quadwords. Because of that, a few additional instructions are needed:

    pxor      %%xmm0, %%xmm0     ; xmm0 = packed_byte(0x00)
    psadbw    %%xmm0, %%xmm3     ; xmm3 = [popcount of bytes 0..7 | popcount of bytes 8..15]
    movhlps   %%xmm3, %%xmm0     ; xmm0 = [           0           | popcount of bytes 0..7 ]
    paddd     %%xmm3, %%xmm0     ; xmm0 = [       not needed      | popcount of bytes 0..15]
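The horizontal sum can be sketched in intrinsics as well; here _mm_srli_si128 plays the role of movhlps, and the helper name sum_epu8 is mine:

    #include <stdint.h>
    #include <emmintrin.h>   /* SSE2: _mm_sad_epu8 */

    /* Sum all 16 byte counts: psadbw against zero yields two partial sums
       (bytes 0..7 and 8..15) in the low and high quadwords; extract and add. */
    static inline uint32_t sum_epu8(const __m128i counts) {
        const __m128i sums = _mm_sad_epu8(counts, _mm_setzero_si128());
        return (uint32_t)(_mm_cvtsi128_si32(sums) +
                          _mm_cvtsi128_si32(_mm_srli_si128(sums, 8)));
    }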

Further improvements

psadbw has 3 or 4 cycles of latency, and the additional instructions also need some time to execute (I guess around 2 cycles).

psadbw doesn't need to be called in every iteration: since the maximum popcount value of a single byte is 8, we can perform up to 255/8 = 31 parallel byte additions (paddb) without overflow. Moreover, the partial sums returned by psadbw can be added together at the end.

Pseudocode:

    pxor    %%xmm5, %%xmm5              // global accumulator

    while (bytes to end > 0) {
        pxor  %%xmm4, %%xmm4            // local accumulator (for inner loop)
        n = min(bytes to end/16, 31)    // up to 31 blocks

        for (i=0; i < n; i++) {
            // calculate xmm3, a vector of popcount for bytes
            paddb %%xmm3, %%xmm4        // xmm4 += xmm3 -- update local acc.
        }

        pxor   %%xmm0, %%xmm0
        psadbw %%xmm0, %%xmm4           // xmm4 -- calculate two popcounts
        paddd  %%xmm4, %%xmm5           // update global acc.
    }

    // add halves of the global accumulator
    movhlps %%xmm5, %%xmm0
    paddd   %%xmm5, %%xmm0
    movd    %%xmm0, %%eax               // eax = population count for all bytes
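Putting the pieces together, a C rendition of this pseudocode might look like the sketch below. It is my illustration, not the repository code: it assumes a 64-bit build, 16-byte-aligned input whose length is a multiple of 16, and the hypothetical popcount_epi8() helper from the earlier sketch.

    #include <stddef.h>
    #include <stdint.h>
    #include <tmmintrin.h>

    uint64_t popcount_ssse3(const uint8_t* data, size_t n) {
        __m128i global = _mm_setzero_si128();          /* global accumulator */
        size_t i = 0;

        while (i < n) {
            __m128i local = _mm_setzero_si128();       /* local byte accumulator */
            /* up to 31 blocks: byte counts are at most 8, so 31*8 = 248 < 256 */
            size_t blocks = (n - i) / 16 < 31 ? (n - i) / 16 : 31;

            for (size_t j = 0; j < blocks; j++, i += 16) {
                const __m128i v = _mm_load_si128((const __m128i*)(data + i));
                local = _mm_add_epi8(local, popcount_epi8(v));   /* paddb */
            }
            /* psadbw: two quadword partial sums, added into the global acc. */
            global = _mm_add_epi64(global,
                                   _mm_sad_epu8(local, _mm_setzero_si128()));
        }
        /* add the two halves of the global accumulator */
        return (uint64_t)(_mm_cvtsi128_si64(global) +
                          _mm_cvtsi128_si64(_mm_srli_si128(global, 8)));
    }

A tail loop for unaligned or odd-sized buffers is omitted here for brevity; the aligned load would also have to become _mm_loadu_si128 for arbitrary pointers.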

Source code

The GitHub repository contains the original code from 2008 and also the newer C++11 (2015, 2016), intrinsics-based implementations.

Experiments (64-bit code)

Programs from the repository were run with default settings and repeated several times. The minimum measurements were considered.

Below is the list of procedures covered here; the repository has more variants.

procedure              | implementation
lookup-8               | LUT-based procedure (std::uint8_t LUT)
lookup-64              | LUT-based procedure (std::uint64_t LUT), avoids zero-extension
bit-parallel           | well-known method, described for example in Bit Twiddling Hacks (sketched below)
bit-parallel-optimized | the above plus the trick from section "Further improvements"
harley-seal            | Harley-Seal variant
sse-bit-parallel       | SSE variant of bit-parallel
sse-lookup             | the method described in this text, using SSE instructions
avx2-lookup            | the method described in this text, using AVX2 instructions
cpu                    | the popcnt instruction emitted via an intrinsic
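For reference, the bit-parallel entry is the classic SWAR popcount from Bit Twiddling Hacks; a 64-bit C rendition (my illustration, not the repository's exact code) looks like this:

    #include <stdint.h>

    /* Classic bit-parallel (SWAR) popcount, 64-bit variant. */
    uint64_t popcount_swar(uint64_t x) {
        x = x - ((x >> 1) & 0x5555555555555555ull);                           /* 2-bit sums */
        x = (x & 0x3333333333333333ull) + ((x >> 2) & 0x3333333333333333ull); /* 4-bit sums */
        x = (x + (x >> 4)) & 0x0f0f0f0f0f0f0f0full;                           /* byte sums  */
        return (x * 0x0101010101010101ull) >> 56;                             /* total      */
    }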

Core i5 (Westmere)

The CPU architecture: Core i5 M540 @ 2.53GHz (Westmere)

More details in a separate file.

procedure              | 32 B    | 64 B    | 128 B   | 256 B   | 512 B   | 1024 B  | 2048 B  | 4094 B
lookup-8               | 2.29884 | 2.20039 | 2.15086 | 2.12830 | 3.40985 | 3.38632 | 3.37334 | 3.36643
lookup-64              | 2.29837 | 2.19979 | 2.15067 | 2.12608 | 3.40112 | 3.38135 | 3.37165 | 3.36490
bit-parallel           | 2.13645 | 2.00652 | 1.93406 | 1.90567 | 3.01241 | 2.99661 | 2.99112 | 2.99828
bit-parallel-optimized | 1.37812 | 1.23970 | 1.16183 | 1.13877 | 1.79016 | 1.77086 | 1.75989 | 1.78260
harley-seal            | 1.47658 | 1.29922 | 0.79424 | 0.63432 | 0.90197 | 0.86194 | 0.83491 | 0.85399
sse-bit-parallel       | 2.69418 | 2.40001 | 1.40793 | 0.95652 | 1.17003 | 1.00129 | 0.92382 | 0.86693
sse-lookup             | 0.75528 | 0.54195 | 0.34942 | 0.31078 | 0.47211 | 0.45694 | 0.44650 | 0.46007
cpu                    | 0.49283 | 0.37799 | 0.32058 | 0.29185 | 0.44360 | 0.43213 | 0.42637 | 0.36332

The CPU's popcnt instruction outperforms the code described here.

Core i7 (Haswell)

The CPU architecture: Haswell i7-4770 CPU @ 3.40GHz.

More details in a separate file.

procedure              | 32 B    | 64 B    | 128 B   | 256 B   | 512 B   | 1024 B  | 2048 B  | 4094 B
lookup-8               | 1.20408 | 1.10938 | 1.06312 | 1.10722 | 1.69922 | 1.66315 | 1.64113 | 1.63397
lookup-64              | 1.17775 | 1.09994 | 1.06374 | 1.09102 | 1.67579 | 1.64548 | 1.62390 | 1.61094
bit-parallel           | 1.26768 | 1.10553 | 1.05222 | 1.02626 | 1.62086 | 1.61024 | 1.60495 | 1.61435
bit-parallel-optimized | 1.00233 | 0.82545 | 0.72246 | 0.67454 | 1.04113 | 1.02366 | 1.01708 | 1.03801
harley-seal            | 1.00260 | 0.79597 | 0.50116 | 0.39440 | 0.54553 | 0.50277 | 0.48139 | 0.48978
sse-bit-parallel       | 2.15206 | 2.02008 | 1.09393 | 0.66777 | 0.75717 | 0.61179 | 0.53791 | 0.49923
sse-lookup             | 0.53520 | 0.33902 | 0.21379 | 0.18061 | 0.26688 | 0.25605 | 0.25054 | 0.25780
avx2-lookup            | 0.53068 | 0.33920 | 0.21373 | 0.13579 | 0.17133 | 0.15500 | 0.14293 | 0.17005
cpu                    | 0.29480 | 0.24051 | 0.15478 | 0.13270 | 0.20052 | 0.19462 | 0.20843 | 0.21684

The AVX2 code is faster than the dedicated popcnt instruction for input sizes of 512 bytes and larger.
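The avx2-lookup procedure extends the same idea to 256-bit registers: the 16-entry nibble LUT is replicated into both 128-bit lanes, since vpshufb looks up within each lane independently. A minimal C sketch of such a kernel (my illustration; the name popcount256_epi8 is hypothetical and the repository's implementation may differ):

    #include <immintrin.h>   /* AVX2 */

    static inline __m256i popcount256_epi8(const __m256i v) {
        const __m256i lut  = _mm256_setr_epi8(0, 1, 1, 2, 1, 2, 2, 3,
                                              1, 2, 2, 3, 2, 3, 3, 4,
                                              0, 1, 1, 2, 1, 2, 2, 3,
                                              1, 2, 2, 3, 2, 3, 3, 4);
        const __m256i mask = _mm256_set1_epi8(0x0f);
        const __m256i lo   = _mm256_and_si256(v, mask);
        const __m256i hi   = _mm256_and_si256(_mm256_srli_epi16(v, 4), mask);

        /* vpshufb performs two independent 16-way lookups, one per lane */
        return _mm256_add_epi8(_mm256_shuffle_epi8(lut, lo),
                               _mm256_shuffle_epi8(lut, hi));
    }

The accumulation scheme is the same as in the SSE version, with _mm256_sad_epu8 producing four quadword partial sums instead of two.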

Core i7 (Skylake)

The CPU architecture: Skylake i7-6700 CPU @ 3.40GHz

More details in a separate file.

procedure              | 32 B    | 64 B    | 128 B   | 256 B   | 512 B   | 1024 B  | 2048 B  | 4094 B
lookup-8               | 1.02956 | 0.94836 | 1.04671 | 0.95623 | 1.46018 | 1.42373 | 1.40633 | 1.39675
lookup-64              | 1.00704 | 0.94387 | 1.03233 | 0.94744 | 1.45629 | 1.42371 | 1.40747 | 1.39947
bit-parallel           | 1.05662 | 0.95297 | 0.90992 | 0.88908 | 1.40587 | 1.39753 | 1.39337 | 1.41585
bit-parallel-optimized | 0.81278 | 0.69091 | 0.63329 | 0.60453 | 0.94443 | 0.93320 | 0.92760 | 0.95122
harley-seal            | 0.81283 | 0.66397 | 0.43348 | 0.34035 | 0.46871 | 0.43077 | 0.41181 | 0.41767
sse-bit-parallel       | 2.25432 | 1.70980 | 0.92443 | 0.57618 | 0.68189 | 0.58220 | 0.50169 | 0.45568
sse-lookup             | 0.40749 | 0.29802 | 0.18290 | 0.15742 | 0.23956 | 0.23057 | 0.22657 | 0.23177
avx2-lookup            | 0.43350 | 0.26368 | 0.17950 | 0.11587 | 0.14881 | 0.13729 | 0.12798 | 0.14222
cpu                    | 0.21676 | 0.16256 | 0.13546 | 0.12192 | 0.18423 | 0.22065 | 0.19643 | 0.20293

Again, the AVX2 code is faster than the dedicated popcnt instruction, this time for input sizes of 256 bytes and larger.

Experiments (32-bit code) — outdated

Note 2016-03-13: this section refers to results from 2008.

ssse3_popcount.c is a test program that contains implementations of the following procedures:

  • a LUT-based popcount with a 256-entry table; I also tested GCC's __builtin_popcount, however it was much slower than my implementation
  • a straightforward SSSE3 implementation
  • an improved SSSE3 implementation
  • the improved SSSE3 implementation with the inner loop unrolled 4 times
  • an SSE2 bit-level parallel implementation
  • an improved SSE2 implementation (using the same tricks as the SSSE3 version)
  • the improved SSE2 implementation with the inner loop unrolled 4 times

The first argument of the program is a function name, the second is the number of 16-byte chunks processed by the selected procedure in one iteration, and the third is the number of iterations.

The table shows results for different chunk counts; the test script I used is available. The program was compiled with the following options:

gcc -O2 -DALIGN_DATA ssse3_popcount.c -o ssse3_popcount

Tests were run on my Linux box with a Core 2 Duo E8200.

The results clearly show that the method presented above brings a significant speedup, which depends on the data size.

The straightforward SSSE3 implementation is 2-2.8 times faster than the LUT procedure, the improved one around 3 times, and the unrolled one 4-5 times.

procedure | number of 16-byte chunks | iterations | time [s] | speedup
          | 1                        | 20,000,000 | 0.22     | 100%
          |                          |            | 0.19     | 115%
          |                          |            | 0.20     | 110%
          |                          |            | 0.14     | 157%
          |                          |            | 0.16     | 137%
          | 8                        | 20,000,000 | 1.42     | 100%
          |                          |            | 0.92     | 154%
          |                          |            | 0.79     | 179%
          |                          |            | 0.61     | 232%
          |                          |            | 0.52     | 273%
          | 32                       | 2,000,000  | 0.55     | 100%
          |                          |            | 0.34     | 161%
          |                          |            | 0.30     | 183%
          |                          |            | 0.22     | 250%
          |                          |            | 0.22     | 250%
          |                          |            | 0.19     | 289%
          |                          |            | 0.12     | 458%
          | 128                      | 200,000    | 0.21     | 100%
          |                          |            | 0.13     | 161%
          |                          |            | 0.11     | 190%
          |                          |            | 0.08     | 262%
          |                          |            | 0.08     | 262%
          |                          |            | 0.07     | 299%
          |                          |            | 0.04     | 525%
          | 512                      | 200,000    | 0.86     | 100%
          |                          |            | 0.53     | 162%
          |                          |            | 0.45     | 191%
          |                          |            | 0.34     | 252%
          |                          |            | 0.34     | 252%
          |                          |            | 0.26     | 330%
          |                          |            | 0.18     | 477%
          | 1024                     | 200,000    | 1.73     | 100%
          |                          |            | 1.07     | 161%
          |                          |            | 0.90     | 192%
          |                          |            | 0.68     | 254%
          |                          |            | 0.69     | 250%
          |                          |            | 0.52     | 332%
          |                          |            | 0.38     | 455%
          | 2048                     | 200,000    | 3.47     | 100%
          |                          |            | 2.14     | 162%
          |                          |            | 1.80     | 192%
          |                          |            | 1.37     | 253%
          |                          |            | 1.38     | 251%
          |                          |            | 1.06     | 327%
          |                          |            | 0.76     | 456%

Acknowledgments

Daniel Lemire gave me access to computers with Haswell and Skylake processors; thanks to that I could play with AVX2 code and run the tests. Kim Walisch contributed the Harley-Seal implementation. Various people provided fixes and enhancements to the sample code. Thank you.

See also

  • Population count using XOP instructions

  • Paper by Daniel Lemire, Nathan Kurz and me: Faster Population Counts using AVX2 Instructions.

  • Hacker news discussion.

  • Speeding up bit-parallel population count: delaying the byte-wise sum (the psadbw trick) applied to the bit-parallel method gives a 50% speedup over the plain SWAR 64-bit procedure.

  • Anatomy of High-Performance 2D Similarity Calculations: the paper contains an interesting comparison of similarity calculations which heavily use the popcount operation. The authors compared 4 basic methods: 1) hardware-based, i.e. the popcnt instruction, 2) a simple LUT, 3) the bit-level parallel method (SSE2 procedure), and 4) the method described here. The most important observation, from my point of view of course, is that the speed of the SSSE3 code is comparable to hardware popcnt; it is just a bit slower.

    The authors also published the full source code, and I noticed they manually unrolled the inner loop. I did the same in my code and the speedup increased from 3 to 4-5 times; the sources and this article have been updated.

  • Benchmarking CRC32 and PopCnt instructions: Peter Kankowski compared the speed of the SSE4.2 instructions crc32 and popcnt against software implementations. Hardware CRC32 is significantly faster, but population count is slightly slower than the algorithm presented here.

Changelog

  • 2016-03-16 — detailed results from Westmere, Haswell & Skylake
  • 2016-03-13 — AVX2 implementation, results from Westmere and Skylake, wording
  • 2015-03-04 — link to github repo
  • 2011-10-22 — link to paper, unrolled loops
  • 2010-06-28 — link to Peter's site
  • 2010-03-27 — SSE2 procedures

Single instruction, multiple data


Single instruction, multiple data (SIMD), is a class of parallel computers in Flynn's taxonomy. It describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously. Thus, such machines exploit data level parallelism, but not concurrency: there are simultaneous (parallel) computations, but only a single process (instruction) at a given moment. SIMD is particularly applicable to common tasks such as adjusting the contrast in a digital image or adjusting the volume of digital audio. Most modern CPU designs include SIMD instructions to improve the performance of multimedia use.

History

The first use of SIMD instructions was in the vector supercomputers of the early 1970s such as the CDC Star-100 and the Texas Instruments ASC, which could operate on a "vector" of data with a single instruction. Vector processing was especially popularized by Cray in the 1970s and 1980s. Vector-processing architectures are now considered separate from SIMD machines, based on the fact that vector machines processed the vectors one word at a time through pipelined processors (though still based on a single instruction), whereas modern SIMD machines process all elements of the vector simultaneously.[1]

The first era of modern SIMD machines was characterized by massively parallel processing-style supercomputers such as the Thinking Machines CM-1 and CM-2. These machines had many limited-functionality processors that would work in parallel. For example, each of 65,536 single-bit processors in a Thinking Machines CM-2 would execute the same instruction at the same time, allowing it, for instance, to logically combine 65,536 pairs of bits at a time, using a hypercube-connected network or processor-dedicated RAM to find its operands. Supercomputing moved away from the SIMD approach when inexpensive scalar MIMD approaches based on commodity processors such as the Intel i860 XP[2] became more powerful, and interest in SIMD waned.

The current era of SIMD processors grew out of the desktop-computer market rather than the supercomputer market. As desktop processors became powerful enough to support real-time gaming and audio/video processing during the 1990s, demand grew for this particular type of computing power, and microprocessor vendors turned to SIMD to meet the demand.[3] Sun Microsystems introduced SIMD integer instructions in its "VIS" instruction set extensions in 1995, in its UltraSPARC I microprocessor. MIPS followed suit with their similar MDMX system.

The first widely deployed desktop SIMD was with Intel's MMX extensions to the x86 architecture in 1996. This sparked the introduction of the much more powerful AltiVec system in the Motorola PowerPC's and IBM's POWER systems. Intel responded in 1999 by introducing the all-new SSE system. Since then, there have been several extensions to the SIMD instruction sets for both architectures.

All of these developments have been oriented toward support for real-time graphics, and are therefore oriented toward processing in two, three, or four dimensions, usually with vector lengths of between two and sixteen words, depending on data type and architecture. When new SIMD architectures need to be distinguished from older ones, the newer architectures are then considered "short-vector" architectures, as earlier SIMD and vector supercomputers had vector lengths from 64 to 64,000. A modern supercomputer is almost always a cluster of MIMD machines, each of which implements (short-vector) SIMD instructions. A modern desktop computer is often a multiprocessor MIMD machine where each processor can execute short-vector SIMD instructions.

Advantages

An application that may take advantage of SIMD is one where the same value is being added to (or subtracted from) a large number of data points, a common operation in many multimedia applications. One example would be changing the brightness of an image. Each pixel of an image consists of three values for the brightness of the red (R), green (G) and blue (B) portions of the color. To change the brightness, the R, G and B values are read from memory, a value is added to (or subtracted from) them, and the resulting values are written back out to memory.

With a SIMD processor there are two improvements to this process. For one the data is understood to be in blocks, and a number of values can be loaded all at once. Instead of a series of instructions saying "retrieve this pixel, now retrieve the next pixel", a SIMD processor will have a single instruction that effectively says "retrieve n pixels" (where n is a number that varies from design to design). For a variety of reasons, this can take much less time than retrieving each pixel individually, as with traditional CPU design.

Another advantage is that the instruction operates on all loaded data in a single operation. In other words, if the SIMD system works by loading up eight data points at once, the operation being applied to the data will happen to all eight values at the same time. This parallelism is separate from the parallelism provided by a superscalar processor; the eight values are processed in parallel even on a non-superscalar processor, and a superscalar processor may be able to perform multiple SIMD operations in parallel.

Disadvantages

  • Not all algorithms can be vectorized easily. For example, a flow-control-heavy task like code parsing may not easily benefit from SIMD; however, it is theoretically possible to vectorize comparisons and "batch flow" to target maximal cache optimality, though this technique requires more intermediate state. Note: batch-pipeline systems (for example, GPUs or software rasterization pipelines) are most advantageous for cache control when implemented with SIMD intrinsics, but they are not exclusive to SIMD features. Avoiding dependences within series such as code strings adds further complexity, since independence is required for vectorization.
  • Large register files increase power consumption and require chip area.
  • Currently, implementing an algorithm with SIMD instructions usually requires human labor; most compilers don't generate SIMD instructions from a typical C program, for instance. Automatic vectorization in compilers is an active area of computer science research. (Compare vector processing.)
  • Programming with particular SIMD instruction sets can involve numerous low-level challenges.
    • SIMD may have restrictions on data alignment; programmers familiar with one particular architecture may not expect this.
    • Gathering data into SIMD registers and scattering it to the correct destination locations is tricky (sometimes requiring permute operations) and can be inefficient.
    • Specific instructions like rotations or three-operand addition are not available in some SIMD instruction sets.
    • Instruction sets are architecture-specific: some processors lack SIMD instructions entirely, so programmers must provide non-vectorized implementations (or different vectorized implementations) for them.
    • The early MMX instruction set shared a register file with the floating-point stack, which caused inefficiencies when mixing floating-point and MMX code. However, SSE2 corrects this.

Chronology

Examples of SIMD supercomputers (not including vector processors):

  • ILLIAC IV, c. 1974
  • ICL Distributed Array Processor (DAP), c. 1974
  • Burroughs Scientific Processor, c. 1976
  • Geometric-Arithmetic Parallel Processor, from Martin Marietta, starting in 1981, continued at Lockheed Martin, then at Teranex and Silicon Optix
  • Massively Parallel Processor (MPP), from NASA/Goddard Space Flight Center, c. 1983-1991
  • Connection Machine, models 1 and 2 (CM-1 and CM-2), from Thinking Machines Corporation, c. 1985
  • MasPar MP-1 and MP-2, c. 1987-1996
  • Zephyr DC computer from Wavetracer, c. 1991
  • Xplor, from Pyxsys, Inc., c. 2001.

Hardware

Small-scale (64 or 128 bits) SIMD became popular on general-purpose CPUs in the early 1990s and continued through 1997 and later with Motion Video Instructions (MVI) for Alpha. SIMD instructions can be found, to one degree or another, on most CPUs, including IBM's AltiVec and SPE for PowerPC, HP's PA-RISC Multimedia Acceleration eXtensions (MAX), Intel's MMX and iwMMXt, SSE, SSE2, SSE3, SSSE3 and SSE4.x, AMD's 3DNow!, ARC's ARC Video subsystem, SPARC's VIS and VIS2, Sun's MAJC, ARM's NEON technology, MIPS' MDMX (MaDMaX) and MIPS-3D. The SPU instruction set of the Cell Processor, co-developed by IBM, Sony and Toshiba, is heavily SIMD-based. NXP, founded by Philips, developed several SIMD processors named Xetal. The Xetal has 320 16-bit processor elements especially designed for vision tasks.

Modern graphics processing units (GPUs) are often wide SIMD implementations, capable of branches, loads, and stores on 128 or 256 bits at a time.

Intel's AVX SIMD instructions now process 256 bits of data at once. Intel's Larrabee prototype microarchitecture includes more than two 512-bit SIMD registers on each of its cores (VPU: Wide Vector Processing Units), and this 512-bit SIMD capability is being continued in Intel's Many Integrated Core Architecture (Intel MIC) and Skylake-X.

Software

SIMD instructions are widely used to process 3D graphics, although modern graphics cards with embedded SIMD have largely taken over this task from the CPU. Some systems also include permute functions that re-pack elements inside vectors, making them particularly useful for data processing and compression. They are also used in cryptography.[4][5][6] The trend of general-purpose computing on GPUs (GPGPU) may lead to wider use of SIMD in the future.

Adoption of SIMD systems in personal computer software was at first slow, due to a number of problems. One was that many of the early SIMD instruction sets tended to slow overall performance of the system due to the re-use of existing floating point registers. Other systems, like MMX and 3DNow!, offered support for data types that were not interesting to a wide audience and had expensive context switching instructions to switch between using the FPU and MMX registers. Compilers also often lacked support, requiring programmers to resort to assembly language coding.

SIMD on x86 had a slow start. The introduction of 3DNow! by AMD and SSE by Intel confused matters somewhat, but today the system seems to have settled down (after AMD adopted SSE) and newer compilers should result in more SIMD-enabled software. Intel and AMD now both provide optimized math libraries that use SIMD instructions, and open source alternatives like libSIMD, SIMDx86 and SLEEF have started to appear.

Apple Computer had somewhat more success, even though they entered the SIMD market later than the rest. AltiVec offered a rich system and could be programmed using increasingly sophisticated compilers from Motorola, IBM and GNU, so assembly language programming was rarely needed. Additionally, many of the systems that would benefit from SIMD were supplied by Apple itself, for example iTunes and QuickTime. However, in 2006, Apple computers moved to Intel x86 processors. Apple's APIs and development tools (Xcode) were modified to support SSE2 and SSE3 as well as AltiVec. Apple was the dominant purchaser of PowerPC chips from IBM and Freescale Semiconductor; even though they abandoned the platform, further development of AltiVec has continued in several Power Architecture designs from Freescale and IBM. At WWDC '15, Apple announced SIMD vector support for version 2.0 of their new programming language Swift.

SIMD within a register, or SWAR, is a range of techniques and tricks used for performing SIMD in general-purpose registers on hardware that doesn't provide any direct support for SIMD instructions. This can be used to exploit parallelism in certain algorithms even on hardware that does not support SIMD directly.

Microsoft added SIMD to .NET in RyuJIT.[7] Libraries that implement SIMD on .NET are available in the NuGet package Microsoft.Bcl.Simd.[8]

SIMD on the web

In 2013 John McCutchan announced[9] that he had created a performant interface to SIMD instruction sets for the Dart programming language, bringing the benefits of SIMD to web programs for the first time. The interface consists of two types:

  • Float32x4, 4 single precision floating point values.
  • Int32x4, 4 32-bit integer values.

Instances of these types are immutable and in optimized code are mapped directly to SIMD registers. Operations expressed in Dart typically are compiled into a single instruction with no overhead. This is similar to C and C++ intrinsics. Benchmarks for 4×4 matrix multiplication, 3D vertex transformation, and Mandelbrot set visualization show near 400% speedup compared to scalar code written in Dart.

McCutchan's work on Dart has been adopted by ECMAScript and Intel announced at IDF 2013 that they are implementing McCutchan's specification for both V8 and SpiderMonkey.

Emscripten, Mozilla's C/C++-to-JavaScript compiler, can with extensions[10] compile C++ programs that make use of SIMD intrinsics or GCC-style vector code to the SIMD API of JavaScript, resulting in equivalent speedups compared to scalar code.

Commercial applications

Though it has generally proven difficult to find sustainable commercial applications for SIMD-only processors, one that has had some measure of success is the GAPP, which was developed by Lockheed Martin and taken to the commercial sector by their spin-off Teranex. The GAPP's recent incarnations have become a powerful tool in real-time video processing applications like conversion between various video standards and frame rates (NTSC to/from PAL, NTSC to/from HDTV formats, etc.), deinterlacing, image noise reduction, adaptive video compression, and image enhancement.

A more ubiquitous application for SIMD is found in video games: nearly every modern video game console since 1998 has incorporated a SIMD processor somewhere in its architecture. The PlayStation 2 was unusual in that one of its vector-float units could function as an autonomous DSP executing its own instruction stream, or as a coprocessor driven by ordinary CPU instructions. 3D graphics applications tend to lend themselves well to SIMD processing as they rely heavily on operations with 4-dimensional vectors. Microsoft's Direct3D 9.0 now chooses at runtime processor-specific implementations of its own math operations, including the use of SIMD-capable instructions.

One of the recent processors to use vector processing is the Cell Processor developed by IBM in cooperation with Toshiba and Sony. It uses a number of SIMD processors (a NUMA architecture, each with independent local store and controlled by a general purpose CPU) and is geared towards the huge datasets required by 3D and video processing applications. It differs from traditional ISAs by being SIMD from the ground up with no separate scalar registers.

A recent advancement by ZiiLabs was the production of a SIMD-type processor which can be used on mobile devices, such as media players and mobile phones.[11]

Larger scale commercial SIMD processors are available from ClearSpeed Technology, Ltd. and Stream Processors, Inc. ClearSpeed's CSX600 (2004) has 96 cores each with 2 double-precision floating point units while the CSX700 (2008) has 192. Stream Processors is headed by computer architect Bill Dally. Their Storm-1 processor (2007) contains 80 SIMD cores controlled by a MIPS CPU.

[Figure: The ordinary tripling of four 8-bit numbers. The CPU loads one 8-bit number into R1, multiplies it with R2, and then saves the answer from R3 back to RAM. This process is repeated for each number.]

[Figure: The SIMD tripling of four 8-bit numbers. The CPU loads 4 numbers at once, multiplies them all in one SIMD multiplication, and saves them all at once back to RAM. In theory, the speed-up is about 75%.]

References
  1. David A. Patterson and John L. Hennessy, Computer Organization and Design: the Hardware/Software Interface, 2nd Edition, Morgan Kaufmann Publishers, Inc., San Francisco, California, 1998, p. 751.
  2. "MIMD1 - XP/S, CM-5" (PDF).
  3. Conte, G.; Tommesani, S.; Zanichelli, F. (2000). "The long and winding road to high-performance image processing with MMX/SSE" (PDF). Proc. IEEE Int'l Workshop on Computer Architectures for Machine Perception.
  4. "RE: SSE2 speed", showing how SSE2 is used to implement SHA hash algorithms.
  5. "Salsa20 speed; Salsa20 software", showing a stream cipher implemented using SSE2.
  6. "Subject: up to 1.4x RSA throughput using SSE2", showing RSA implemented using a non-SIMD SSE2 integer multiply instruction.
  7. "RyuJIT: The next-generation JIT compiler for .NET".
  8. "The JIT finally proposed. JIT and SIMD are getting married".
  9. https://www.dartlang.org/slides/2013/02/Bringing-SIMD-to-the-Web-via-Dart.pdf
  10. https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnx3cG12cDIwMTV8Z3g6NTkzYWE2OGNlNDAyMTRjOQ
  11. ZiiLabs Corporate Website, https://secure.ziilabs.com/products/processors/zms05.aspx
