Simple-V (Parallelism Extension Proposal) Specification

  • Status: DRAFT v0.2
  • Last edited: 17 Oct 2018
  • Ancillary resource: opcodes

With thanks to:

  • Allen Baum
  • Jacob Bachmeyer
  • Guy Lemieux
  • Jacob Lifshay
  • The RISC-V Founders, without whom this all would not be possible.

Summary and Background: Rationale

Simple-V is a uniform parallelism API for RISC-V hardware that has several unplanned side-effects including code-size reduction, expansion of HINT space and more. The reason for creating it is to provide a manageable way to turn a pre-existing design into a parallel one, in a step-by-step incremental fashion, allowing the implementor to focus on adding hardware where it is needed and necessary.

Critically: No new instructions are added. The parallelism (if any is implemented) is implicitly added by tagging standard scalar registers for redirection. When such a tagged register is used in any instruction, it indicates that the PC shall not be incremented; instead a loop is activated where multiple instructions are issued to the pipeline (as determined by a length CSR), with contiguously incrementing register numbers starting from the tagged register. When the last "element" has been reached, only then is the PC permitted to move on. Thus Simple-V effectively sits (slots) in between the instruction decode phase and the ALU(s).

The barrier to entry with SV is therefore very low. The minimum compliant implementation is software-emulation (traps), requiring only the CSRs and CSR tables, and that an exception be thrown if an instruction's registers are detected to have been tagged. The looping that would otherwise be done in hardware is thus carried out in software instead. Whilst much slower, it is "compliant" with the SV specification, and may be suited to RV32E and to situations where the implementor wishes to focus on certain aspects of SV without sinking unnecessary time and resources into silicon, whilst still conforming strictly with the API. The polymorphic element width capability, for example, would be a good candidate to punt to software.

Hardware Parallelism, if any, is therefore added at the implementor's discretion to turn what would otherwise be a sequential loop into a parallel one.

To emphasise that clearly: Simple-V (SV) is not:

  • A SIMD system
  • A SIMT system
  • A Vectorisation Microarchitecture
  • A microarchitecture of any specific kind
  • A mandatory parallel processor microarchitecture of any kind
  • A supercomputer extension

SV does not tell implementors how or even if they should implement parallelism: it is a hardware "API" (Application Programming Interface) that, if implemented, presents a uniform and consistent way to express parallelism, at the same time leaving the choice of if, how, how much, when and whether to parallelise operations entirely to the implementor.


CSRs

For U-Mode there are two CSR key-value stores needed to create lookup tables which are used at the register decode phase.

  • A register CSR key-value table (typically 8 32-bit CSRs of 2 16-bits each)
  • A predication CSR key-value table (again, 8 32-bit CSRs of 2 16-bits each)
  • Small U-Mode and S-Mode register and predication CSR key-value tables (2 32-bit CSRs of 2x 16-bit entries each).
  • An optional "reshaping" CSR key-value table which remaps from a 1D linear shape to 2D or 3D, including full transposition.

There are also four additional CSRs for User-Mode:

  • CFG subsets the CSR tables
  • MVL (the Maximum Vector Length)
  • VL (which has different characteristics from standard CSRs)
  • STATE (useful for saving and restoring during context switch, and for providing fast transitions)

There are also three additional CSRs for Supervisor-Mode:

  • SMVL
  • SVL
  • SSTATE

And likewise for M-Mode:

  • MMVL
  • MVL
  • MSTATE

Both Supervisor and M-Mode have their own (small) CSR register and predication tables of only 4 entries each.


CFG

This CSR may be used to switch between subsets of the CSR Register and Predication Tables: it is kept to 5 bits so that a single CSRRWI instruction can be used. A setting of all ones is reserved to indicate that SimpleV is disabled.

| (4..3) | (2..0) |
| size   | bank   |

Bank is 3 bits in size, and indicates the starting index of the CSR Register and Predication Table entries that are "enabled". Given that each 32-bit CSR table row contains 2 16-bit CAM entries, there are only 8 CSRs to cover in each table, so 3 bits is sufficient.

Size is 2 bits. With the exception of when bank == 7 and size == 3, the number of elements enabled is taken by left-shifting 2 by size:

| size | elements |
| 0    | 2        |
| 1    | 4        |
| 2    | 8        |
| 3    | 16       |

Given that there are 2 16-bit CAM entries per CSR table row, this may also be viewed as the number of CSR rows to enable, by raising 2 to the power of size.


  • When bank = 0 and size = 3, SVREGCFG0 through to SVREGCFG7 are enabled, and SVPREDCFG0 through to SVPREDCFG7 are enabled.
  • When bank = 1 and size = 3, SVREGCFG1 through to SVREGCFG7 are enabled, and SVPREDCFG1 through to SVPREDCFG7 are enabled.
  • When bank = 3 and size = 0, SVREGCFG3 and SVPREDCFG3 are enabled.
  • When bank = 7 and size = 1, SVREGCFG7 and SVPREDCFG7 are enabled.
  • When bank = 7 and size = 3, SimpleV is entirely disabled.

In this way it is possible to enable and disable SimpleV with a single instruction, and, furthermore, on context-switching the quantity of CSRs to be saved and restored is greatly reduced.
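The bank/size decode above can be sketched in a few lines. This is a minimal illustration only, assuming the field layout in the table above; `decode_cfg` is an illustrative name, not part of the spec:

```python
def decode_cfg(cfg):
    """Return (bank, n_entries), or None when SimpleV is disabled."""
    bank = cfg & 0b111          # bits 2..0: starting CSR table index
    size = (cfg >> 3) & 0b11    # bits 4..3: how many entries to enable
    if bank == 7 and size == 3:
        return None             # all ones: SimpleV entirely disabled
    n_entries = 2 << size       # 2, 4, 8 or 16 CAM entries
    return bank, n_entries

# bank=0, size=3 enables all 16 entries (SVREGCFG0..7)
print(decode_cfg(0b11000))  # (0, 16)
# bank=3, size=0 enables 2 entries (SVREGCFG3 only)
print(decode_cfg(0b00011))  # (3, 2)
# all ones: disabled
print(decode_cfg(0b11111))  # None
```

Since the whole CFG value fits in 5 bits, any of these settings may be applied with a single CSRRWI.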


MAXVECTORLENGTH (MVL)

MAXVECTORLENGTH is the same concept as MVL in RVV, except that it is variable and may be dynamically set. MVL is however limited to the regfile bitwidth XLEN (1-32 for RV32, 1-64 for RV64 and so on).

The reason for setting this limit is so that predication registers, when marked as such, may fit into a single register as opposed to fanning out over several registers. This keeps the implementation a little simpler.

The other important factor to note is that the actual MVL is offset by one, so that it can fit into only 6 bits (for RV64) and still cover a range up to XLEN bits. So, when setting the MVL CSR to 0, this actually means that MVL==1. When setting the MVL CSR to 3, this actually means that MVL==4, and so on. This is expressed more clearly in the "pseudocode" section, where there are subtle differences between CSRRW and CSRRWI.

Vector Length (VL)

VSETVL is slightly different from RVV. Like RVV, VL is set to be within the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN)

VL = rd = MIN(vlen, MVL)

where 1 <= MVL <= XLEN

However, just like MVL, it is important to note that the range for VL has subtle design implications, covered in the "CSR pseudocode" section.

The fixed (specific) setting of VL allows vector LOAD/STORE to be used to switch the entire bank of registers using a single instruction (see Appendix, "Context Switch Example"). The reason for limiting VL to XLEN is down to the fact that predication bits fit into a single register of length XLEN bits.

The second change is that when VSETVL is requested to be stored into x0, it is ignored silently (VSETVL x0, x5)

The third and most important change is that, within the limits set by MVL, the value passed in must be set in VL (and in the destination register).

This has implication for the microarchitecture, as VL is required to be set (limits from MVL notwithstanding) to the actual value requested. RVV has the option to set VL to an arbitrary value that suits the conditions and the micro-architecture: SV does not permit this.

The reason is so that if SV is to be used for a context-switch or as a substitute for LOAD/STORE-Multiple, the operation can be done with only 2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1}, single LD/ST operation). If VL does not get set to the register file length when VSETVL is called, then a software-loop would be needed. To avoid this need, VL must be set to exactly what is requested (limits notwithstanding).
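This requirement can be shown with a minimal model (a sketch only; the function name is illustrative, not from the spec). The key point is that VL receives exactly min(requested, MVL), and that the *new* value lands in the destination register:

```python
def vsetvl(requested, MVL):
    # MVL itself is limited elsewhere to 1 <= MVL <= XLEN; requesting
    # a value of zero would raise an exception (see CSR pseudocode).
    assert requested >= 1 and MVL >= 1
    VL = min(requested, MVL)   # exactly what was asked for, capped at MVL
    rd = VL                    # destination gets the NEW VL, not the old CSR
    return VL, rd

# context-switch / LOAD-Multiple substitute: request the whole bank
print(vsetvl(31, 31))   # (31, 31): no software loop needed
# loop tail: fewer elements left than MVL
print(vsetvl(3, 8))     # (3, 3)
```

If VL were instead permitted to be set to an arbitrary implementation-chosen value (as RVV allows), the single-instruction context-switch idiom above would require a software loop.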

Therefore, in turn, unlike RVV, implementors must provide pseudo-parallelism (using sequential loops in hardware) if actual hardware-parallelism in the ALUs is not deployed. A hybrid is also permitted (as used in Broadcom's VideoCore-IV) however this must be entirely transparent to the ISA.

The fourth change is that VSETVL is implemented as a CSR, where the behaviour of CSRRW (and CSRRWI) must be changed to specifically store the new value in the destination register, not the old value. Where context-load/save is to be implemented in the usual fashion by using a single CSRRW instruction to obtain the old value, the secondary CSR must be used (SVSTATE). This CSR behaves exactly as standard CSRs, and contains more than just VL.

One interesting side-effect of using CSRRWI to set VL is that this may be done with a single instruction, useful particularly for a context-load/save. There are however limitations: CSRRWI's immediate is limited to 0-31 (which, offset by one, covers VL=1 to VL=32).


STATE

This is a standard CSR that contains sufficient information for a full context save/restore. It contains (and permits setting of) MVL, VL, CFG, the destination element offset of the current parallel instruction being executed, and, for twin-predication, the source element offset as well. Interestingly it may hypothetically also be used to cause the immediately-following instruction to skip a certain number of elements; however the recommended method for doing so is predication.

Setting destoffs and srcoffs is realistically intended for saving state, so that exceptions (page faults in particular) may be serviced, and the hardware-loop that was being executed at the time of the trap, from User-Mode (or Supervisor-Mode), may be returned to and continued from exactly where it left off. This works because the User-Mode STATE CSR is neither used nor altered in M-Mode or S-Mode (which is entirely why M-Mode and S-Mode have their own STATE CSRs).

The format of the STATE CSR is as follows:

| (28..26) | (25..24) | (23..18) | (17..12) | (11..6) | (5..0) |
| size     | bank     | destoffs | srcoffs  | vl      | maxvl  |

When setting this CSR, the following characteristics will be enforced:

  • MAXVL will be truncated (after offset) to be within the range 1 to XLEN
  • VL will be truncated (after offset) to be within the range 1 to MAXVL
  • srcoffs will be truncated to be within the range 0 to VL-1
  • destoffs will be truncated to be within the range 0 to VL-1
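The truncation rules can be sketched as follows (XLEN=64 assumed for illustration; the function name is not from the spec):

```python
XLEN = 64  # illustrative assumption

def write_state(maxvl, vl, srcoffs, destoffs):
    """Apply the STATE write-truncation rules, returning stored values."""
    maxvl = min(max(maxvl, 1), XLEN)     # 1 <= MAXVL <= XLEN
    vl = min(max(vl, 1), maxvl)          # 1 <= VL <= MAXVL
    srcoffs = min(srcoffs, vl - 1)       # 0 <= srcoffs <= VL-1
    destoffs = min(destoffs, vl - 1)     # 0 <= destoffs <= VL-1
    return maxvl, vl, srcoffs, destoffs

print(write_state(100, 40, 50, 0))  # (64, 40, 39, 0)
```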

MVL, VL and CSR Pseudocode

The pseudo-code for get and set of VL and MVL are as follows:

set_mvl_csr(value, rd):
    regs[rd] = MVL
    MVL = MIN(value, MVL)

set_vl_csr(value, rd):
    VL = MIN(value, MVL)
    regs[rd] = VL # yes, returning the new value, NOT the old CSR

Note that whilst setting MVL behaves as a normal CSR (returning the old value), setting VL, unlike standard CSR behaviour, will return the new value of VL, not the old one.

For CSRRWI, the range of the immediate is restricted to 5 bits. In order to maximise the effectiveness, an immediate of 0 is used to set VL=1, an immediate of 1 is used to set VL=2 and so on:

CSRRWI_Set_MVL(value):
    set_mvl_csr(value+1, x0)

CSRRWI_Set_VL(value):
    set_vl_csr(value+1, x0)

However for CSRRW the following pseudocode is used for MVL and VL, where setting the value to zero will cause an exception to be raised. The reason is that if VL or MVL are set to zero, the STATE CSR is not capable of returning that value.

CSRRW_Set_MVL(rs1, rd):
    value = regs[rs1]
    if value == 0:
        raise Exception
    set_mvl_csr(value, rd)

CSRRW_Set_VL(rs1, rd):
    value = regs[rs1]
    if value == 0:
        raise Exception
    set_vl_csr(value, rd)

In this way, when CSRRW is utilised with a loop variable, the value that goes into VL (and into the destination register) may be used in an instruction-minimal fashion:

 CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
 CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
 CSRRWI MVL, 3          # sets MVL == **4** (not 3)
 j zerotest             # in case loop counter a0 already 0
loop:
 CSRRW VL, t0, a0       # vl = t0 = min(mvl, a0)
 ld     a3, a1          # load 4 registers a3-6 from x
 slli   t1, t0, 3       # t1 = vl * 8 (in bytes)
 ld     a7, a2          # load 4 registers a7-10 from y
 add    a1, a1, t1      # increment pointer to x by vl*8
 fmadd  a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
 sub    a0, a0, t0      # n -= vl (t0)
 st     a7, a2          # store 4 registers a7-10 to y
 add    a2, a2, t1      # increment pointer to y by vl*8
zerotest:
 bnez   a0, loop        # repeat if n != 0

With the STATE CSR, just like with CSRRWI, in order to maximise the utilisation of the limited bitspace, "000000" in binary represents VL==1, "000001" represents VL==2 and so on (likewise for MVL):

CSRRW_Set_SV_STATE(rs1, rd):
    value = regs[rs1]
    MVL = set_mvl_csr(value[5:0]+1)
    VL = set_vl_csr(value[11:6]+1)
    srcoffs = value[17:12]
    destoffs = value[23:18]
    CFG = value[28:24]

    regs[rd] = (MVL-1) | (VL-1)<<6 | (srcoffs)<<12 |
               (destoffs)<<18 | (CFG)<<24
    return regs[rd]

In both cases, whilst CSR read of VL and MVL return the exact values of VL and MVL respectively, reading and writing the STATE CSR returns those values minus one. This is absolutely critical to implement if the STATE CSR is to be used for fast context-switching.
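The minus-one encoding and the field layout can be demonstrated with a pack/unpack round-trip (a sketch following the STATE format table above; `pack_state`/`unpack_state` are illustrative names):

```python
def pack_state(maxvl, vl, srcoffs, destoffs, cfg):
    return ((maxvl - 1)        # bits  5..0: MVL, stored minus one
            | (vl - 1) << 6    # bits 11..6: VL, stored minus one
            | srcoffs << 12    # bits 17..12
            | destoffs << 18   # bits 23..18
            | cfg << 24)       # bits 28..24: bank + size

def unpack_state(state):
    return ((state & 0x3f) + 1,         # MVL
            ((state >> 6) & 0x3f) + 1,  # VL
            (state >> 12) & 0x3f,       # srcoffs
            (state >> 18) & 0x3f,       # destoffs
            (state >> 24) & 0x1f)       # CFG

s = pack_state(64, 4, 1, 2, 0)
print(unpack_state(s))  # (64, 4, 1, 2, 0)
```

Note that MVL=64 fits into 6 bits only *because* of the minus-one encoding: a STATE field value of 63 means MVL==64.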

Register CSR key-value (CAM) table

The purpose of the Register CSR table is three-fold:

  • To mark integer and floating-point registers as requiring "redirection" if they are ever used as a source or destination in any given operation. This involves a level of indirection through a 5-to-7-bit lookup table, such that unmodified 5-bit (3-bit for Compressed) operands may access up to 64 registers.
  • To indicate whether, after redirection through the lookup table, the register is a vector (or remains a scalar).
  • To over-ride the implicit or explicit bitwidth that the operation would normally give the register.
| RgCSR | 15      | (14..8)  | 7   | (6..5) | (4..0) |
| 0     | isvec0  | regidx0  | i/f | vew0   | regkey |
| 1     | isvec1  | regidx1  | i/f | vew1   | regkey |
| ..    | isvec.. | regidx.. | i/f | vew..  | regkey |
| 15    | isvec15 | regidx15 | i/f | vew15  | regkey |

i/f is set to "1" to indicate that the redirection/tag entry is to be applied to integer registers; 0 indicates that it is relevant to floating-point registers. vew has the following meanings, indicating that the instruction's operand size is "over-ridden" in a polymorphic fashion:

| vew | bitwidth  |
| 00  | default   |
| 01  | default/2 |
| 10  | default*2 |
| 11  | 8         |

As the above table is a CAM (key-value store) it may be appropriate (faster, implementation-wise) to expand it as follows:

struct vectorised fp_vec[32], int_vec[32];

for (i = 0; i < 16; i++) // 16 CSRs?
   tb = int_vec if CSRvec[i].type == 0 else fp_vec
   idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
   tb[idx].elwidth  = CSRvec[i].elwidth
   tb[idx].regidx   = CSRvec[i].regidx  // indirection
   tb[idx].isvector = CSRvec[i].isvector // 0=scalar
   tb[idx].packed   = CSRvec[i].packed  // SIMD or not
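An executable version of the same expansion idea, using plain dicts and following the convention in the pseudocode above (type==0 selects the integer table). Field and function names are illustrative only:

```python
def expand_cam(csr_entries):
    """Unpack key-value CSR entries into directly-indexed lookup tables."""
    int_vec = [None] * 32
    fp_vec = [None] * 32
    for e in csr_entries:
        tb = int_vec if e["type"] == 0 else fp_vec  # i/f selector
        tb[e["regkey"]] = {                         # 5-bit key from opcode
            "regidx": e["regidx"],                  # redirection target
            "isvector": e["isvector"],              # 0 = remains scalar
            "elwidth": e["elwidth"],                # polymorphic override
        }
    return int_vec, fp_vec

entries = [{"type": 0, "regkey": 3, "regidx": 40, "isvector": 1, "elwidth": 0}]
int_vec, fp_vec = expand_cam(entries)
print(int_vec[3])  # {'regidx': 40, 'isvector': 1, 'elwidth': 0}
```

With the table expanded this way, register decode becomes a direct array index on the 5-bit operand, rather than a CAM search.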

The actual size of the CSR Register table depends on the platform and on whether other Extensions are present (RV64G, RV32E, etc.). For details see "Subsets" section.

16-bit CSR Register CAM entries are mapped directly into 32-bit on any RV32-based system, however RV64 (XLEN=64) and RV128 (XLEN=128) are slightly different: the 16-bit entries appear (and can be set) multiple times, in an overlapping fashion. Here is the table for RV64:

| CSR#  | 63..48  | 47..32  | 31..16  | 15..0   |
| 0x4c0 | RgCSR3  | RgCSR2  | RgCSR1  | RgCSR0  |
| 0x4c1 | RgCSR5  | RgCSR4  | RgCSR3  | RgCSR2  |
| 0x4c2 | ...     | ...     | ...     | ...     |
| 0x4c6 | RgCSR15 | RgCSR14 | RgCSR13 | RgCSR12 |
| 0x4c7 | n/a     | n/a     | RgCSR15 | RgCSR14 |

The rules for writing to these CSRs are that any entries above the ones being set will be automatically wiped (to zero), so to fill several entries they must be written in a sequentially increasing manner. This functionality was in an early draft of RVV and it means that, firstly, compilers do not have to spend time zero-ing out CSRs unnecessarily, and secondly, that on context-switching (and function calls) the number of CSRs that may need saving is implicitly known.
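The wipe-above rule can be modelled in a few lines (a sketch for a 16-entry table; the function name is illustrative):

```python
def write_csr_entry(table, idx, value):
    """Write one entry; everything above the written index is zeroed."""
    table[idx] = value
    for i in range(idx + 1, len(table)):
        table[i] = 0
    return table

t = [0] * 16
write_csr_entry(t, 0, 11)
write_csr_entry(t, 1, 22)   # sequential fill: earlier entries survive
write_csr_entry(t, 5, 33)   # jumping ahead leaves 2..4 as zero
print(t[:7])                # [11, 22, 0, 0, 0, 33, 0]
```

Because a write to entry N guarantees all entries above N are zero, the highest-written index alone determines how many CSRs a context-switch must save.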

The reason for the overlapping entries is that in the worst-case on an RV64 system, only 4 64-bit CSR reads/writes are required for a full context-switch (and an RV128 system, only 2 128-bit CSR reads/writes).
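The overlap can be checked directly: if 64-bit CSR row n exposes 16-bit entries 2n..2n+3 (as in the table above), then the four non-overlapping reads at rows 0, 2, 4 and 6 cover all 16 entries. Helper name is illustrative:

```python
def entries_in_row(n):
    """RgCSR entry indices visible in 64-bit CSR row n (overlap by 2)."""
    return [2 * n + i for i in range(4) if 2 * n + i < 16]

covered = []
for n in (0, 2, 4, 6):   # the 4 reads needed for a full context-switch
    covered += entries_in_row(n)
print(covered == list(range(16)))  # True
```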


TODO: move elsewhere

# TODO: use elsewhere (retire for now)
vew = CSRbitwidth[rs1]
if (vew == 0)
    bytesperreg = (XLEN/8) # or FLEN as appropriate
elif (vew == 1)
    bytesperreg = (XLEN/4) # or FLEN/2 as appropriate
else
    bytesperreg = bytestable[vew] # 8 or 16
simdmult = (XLEN/8) / bytesperreg # or FLEN as appropriate
vlen = CSRvectorlen[rs1] * simdmult
CSRvlength = MIN(MIN(vlen, MAXVECTORLENGTH), rs2)

The reason for multiplying the vector length by the number of SIMD elements (in each individual register) is so that each SIMD element may optionally be predicated.

An example of how to subdivide the register file when bitwidth != default is given in the section "Bitwidth Virtual Register Reordering".

Predication CSR

TODO: update CSR tables, now 7-bit for regidx

The Predication CSR is a key-value store indicating whether, if a given destination register (integer or floating-point) is referred to in an instruction, it is to be predicated. It is particularly important to note that the actual register used can be different from the one that is in the instruction, due to the redirection through the lookup table.

  • regidx is the actual register that in combination with the i/f flag, if that integer or floating-point register is referred to, results in the lookup table being referenced to find the predication mask to use on the operation in which that (regidx) register has been used
  • predidx (in combination with the bank bit in the future) is the actual register to be used for the predication mask. Note: in effect predidx is actually a 6-bit register address, as the bank bit is the MSB (and is nominally set to zero for now).
  • inv indicates that the predication mask bits are to be inverted prior to use without actually modifying the contents of the register itself.
  • zeroing is either 1 or 0, and if set to 1, the operation must place zeros in any element position where the predication mask is set to zero. If zeroing is set to 0, unpredicated elements must be left alone. Some microarchitectures may choose to interpret this as skipping the operation entirely. Others which wish to stick more closely to a SIMD architecture may choose instead to interpret unpredicated elements as an internal "copy element" operation (which would be necessary in SIMD microarchitectures that perform register-renaming)
  • "packed" indicates if the register is to be interpreted as SIMD i.e. containing multiple contiguous elements of size equal to "bitwidth". (Note: in earlier drafts this was in the Register CSR table. However after extending to 7 bits there was not enough space. To use "unpredicated" packed SIMD, set the predicate to x0 and set "invert". This has the effect of setting a predicate of all 1s)
| PrCSR | 13     | 12     | 11    | 10  | (9..5) | (4..0)  |
| 0     | bank0  | zero0  | inv0  | i/f | regidx | predkey |
| 1     | bank1  | zero1  | inv1  | i/f | regidx | predkey |
| ..    | bank.. | zero.. | inv.. | i/f | regidx | predkey |
| 15    | bank15 | zero15 | inv15 | i/f | regidx | predkey |

The Predication CSR Table is a key-value store, so implementation-wise it will be faster to turn the table around (maintain topologically equivalent state):

struct pred {
    bool zero;
    bool inv;
    bool enabled;
    int predidx; // redirection: actual int register to use
};

struct pred fp_pred_reg[32];   // 64 in future (bank=1)
struct pred int_pred_reg[32];  // 64 in future (bank=1)

for (i = 0; i < 16; i++)
  tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
  idx = CSRpred[i].regidx
  tb[idx].zero = CSRpred[i].zero
  tb[idx].inv  = CSRpred[i].inv
  tb[idx].predidx  = CSRpred[i].predidx
  tb[idx].enabled  = true

So when an operation is to be predicated, it is the internal state that is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following pseudo-code for operations is given, where p is the explicit (direct) reference to the predication register to be used:

for (int i=0; i<vl; ++i)
    if ([!]preg[p][i])
       (d ? vreg[rd][i] : sreg[rd]) =
        iop(s1 ? vreg[rs1][i] : sreg[rs1],
            s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs

This instead becomes an indirect reference using the internal state table generated from the Predication CSR key-value store, which is used as follows.

if type(iop) == INT:
    preg = int_pred_reg[rd]
else:
    preg = fp_pred_reg[rd]

for (int i=0; i<vl; ++i)
    predicate, zeroing = get_pred_val(type(iop) == INT, rd)
    if (predicate & (1<<i))
       (d ? regfile[rd+i] : regfile[rd]) =
        iop(s1 ? regfile[rs1+i] : regfile[rs1],
            s2 ? regfile[rs2+i] : regfile[rs2]); // for insts with 2 inputs
    else if (zeroing)
       (d ? regfile[rd+i] : regfile[rd]) = 0


  • d, s1 and s2 are booleans indicating whether destination, source1 and source2 are vector or scalar
  • key-value CSR-redirection of rd, rs1 and rs2 have NOT been included above, for clarity. rd, rs1 and rs2 all also must ALSO go through register-level redirection (from the Register CSR table) if they are vectors.
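The predicated loop above can be modelled directly (redirection omitted, exactly as in the pseudocode; the predicate and zeroing flag are assumed to have already been obtained, and all names are illustrative):

```python
def predicated_add(regfile, rd, rs1, rs2, vl, predicate, zeroing,
                   d=True, s1=True, s2=True):
    """Element-wise add under a predicate mask, with optional zeroing."""
    for i in range(vl):
        if predicate & (1 << i):
            a = regfile[rs1 + i] if s1 else regfile[rs1]
            b = regfile[rs2 + i] if s2 else regfile[rs2]
            regfile[rd + i if d else rd] = a + b
        elif zeroing:
            regfile[rd + i if d else rd] = 0
        # zeroing == False: unpredicated elements are left alone

regs = list(range(32))
predicated_add(regs, 16, 0, 8, 4, predicate=0b0101, zeroing=True)
print(regs[16:20])  # [8, 0, 12, 0]
```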

If written as a function, obtaining the predication mask (and whether zeroing takes place) may be done as follows:

def get_pred_val(bool is_fp_op, int reg):
   tb = fp_reg if is_fp_op else int_reg
   if (!tb[reg].enabled):
      return ~0x0, False       // all enabled; no zeroing
   tb = fp_pred if is_fp_op else int_pred
   if (!tb[reg].enabled):
      return ~0x0, False       // all enabled; no zeroing
   predidx = tb[reg].predidx   // redirection occurs HERE
   predicate = intreg[predidx] // actual predicate HERE
   if (tb[reg].inv):
      predicate = ~predicate   // invert ALL bits
   return predicate, tb[reg].zero
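A runnable sketch of the same two-stage lookup, using plain dicts for the (already-expanded) register and predication tables. It also demonstrates the earlier trick for "unpredicated" operation: pointing predidx at x0 (always zero) and setting invert yields an all-ones predicate. All names are illustrative:

```python
def get_pred_val(reg_tb, pred_tb, intreg, reg):
    """Two-stage lookup: register table first, then predication table."""
    if not reg_tb.get(reg):             # register not tagged at all
        return ~0, False                # all enabled; no zeroing
    p = pred_tb.get(reg)
    if not p:                           # tagged, but no predicate entry
        return ~0, False
    predicate = intreg[p["predidx"]]    # redirection occurs HERE
    if p["inv"]:
        predicate = ~predicate          # invert ALL bits
    return predicate, p["zero"]

intreg = [0] * 32
intreg[5] = 0b1010
reg_tb = {3: True}
pred_tb = {3: {"predidx": 5, "inv": False, "zero": True}}
print(get_pred_val(reg_tb, pred_tb, intreg, 3))   # (10, True)
# x0 (always zero) plus invert == all-ones predicate
pred_tb[3] = {"predidx": 0, "inv": True, "zero": False}
print(get_pred_val(reg_tb, pred_tb, intreg, 3))   # (-1, False)
```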

Note here, critically, that only if the register is marked in its CSR register table entry as being "active" does the testing proceed further to check if the CSR predicate table entry is also active.

Note also that this is in direct contrast to branch operations for the storage of comparisons: in these specific circumstances the requirement for there to be an active CSR register entry is removed.


(Note: both the REMAP and SHAPE sections are best read after the rest of the document has been read)

There is one 32-bit CSR which may be used to indicate which registers, if used in any operation, must be "reshaped" (re-mapped) from a linear form to a 2D or 3D transposed form. The 32-bit REMAP CSR may reshape up to 3 registers:

| 29..28 | 27..26 | 25..24 | 23 | 22..16  | 15 | 14..8   | 7 | 6..0    |
| shape2 | shape1 | shape0 | 0  | regidx2 | 0  | regidx1 | 0 | regidx0 |

regidx0-2 refer not to the Register CSR CAM entry but to the underlying real register (see regidx, the value) and consequently are 7 bits wide. shape0-2 refers to one of three SHAPE CSRs. A value of 0x3 is reserved. Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.

SHAPE 1D/2D/3D vector-matrix remapping CSRs

(Note: both the REMAP and SHAPE sections are best read after the rest of the document has been read)

There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each, which have the same format. When each SHAPE CSR is set entirely to zeros, remapping is disabled: the register's elements are a linear (1D) vector.

| 26..24  | 23 | 22..16 | 15 | 14..8  | 7 | 6..0   |
| permute | 0  | zdimsz | 0  | ydimsz | 0 | xdimsz |

xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates that the array dimensionality for that dimension is 1. A value of xdimsz=2 would indicate that in the first dimension there are 3 elements in the array. The format of the array is therefore as follows:


However whilst illustrative of the dimensionality, that does not take the "permute" setting into account. "permute" may be any one of six values (0-5, with values of 6 and 7 being reserved, and not legal). The table below shows how the permutation dimensionality order works:

| permute | order | array format             |
| 000     | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
| 001     | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
| 010     | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
| 011     | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
| 100     | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
| 101     | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |

In other words, the "permute" option changes the order in which nested for-loops over the array would be done. The algorithm below shows this more clearly, and may be executed as a python program:

# mapidx = REMAP.shape2
xdim = 3 # SHAPE[mapidx].xdim_sz+1
ydim = 4 # SHAPE[mapidx].ydim_sz+1
zdim = 5 # SHAPE[mapidx].zdim_sz+1

lims = [xdim, ydim, zdim]
idxs = [0,0,0] # starting indices
order = [1,0,2] # experiment with different permutations, here

for idx in range(xdim * ydim * zdim):
    new_idx = idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
    print(new_idx, end=" ")
    for i in range(3):
        idxs[order[i]] = idxs[order[i]] + 1
        if idxs[order[i]] != lims[order[i]]:
            break
        idxs[order[i]] = 0

It is assumed that this algorithm is run within all pseudo-code throughout this document: wherever a (parallelism) for-loop would normally run from 0 to VL-1 to refer to contiguous register elements, and REMAP indicates to do so, the element index is instead run through the above algorithm to work out the actual element index. Given that there are three possible SHAPE entries, up to three separate registers in any given operation may be simultaneously remapped:

function op_add(rd, rs1, rs2) # add not VADD!
  for (i = 0; i < VL; i++)
    if (predval & 1<<i) # predication uses intregs
       ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                             ireg[rs2+remap(irs2)];
    if (int_vec[rd ].isvector)  { id += 1; }
    if (int_vec[rs1].isvector)  { irs1 += 1; }
    if (int_vec[rs2].isvector)  { irs2 += 1; }

By changing remappings, 2D matrices may be transposed "in-place" for one operation, followed by setting a different permutation order without having to move the values in the registers to or from memory. Also, the reason for having REMAP separate from the three SHAPE CSRs is so that in a chain of matrix multiplications and additions, for example, the SHAPE CSRs need only be set up once; only the REMAP CSR need be changed to target different registers.

Note that:

  • If permute option 000 is utilised, the actual order of the reindexing does not change!
  • If two or more dimensions are set to zero, the actual order does not change!
  • The above algorithm is pseudo-code only. Actual implementations will need to take into account the fact that the element for-looping must be re-entrant, due to the possibility of exceptions occurring. See MSTATE CSR, which records the current element index.
  • Twin-predicated operations require two separate and distinct element offsets. The above pseudo-code algorithm will be applied separately and independently to each, should each of the two operands be remapped. This even includes C.LDSP and other operations in that category, where in that case it will be the offset that is remapped (see Compressed Stack LOAD/STORE section).
  • Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to less than MVL is perfectly legal, albeit very obscure. It permits entries to be regularly presented to operands more than once, thus allowing the same underlying registers to act as an accumulator of multiple vector or matrix operations, for example.

Clearly here some considerable care needs to be taken as the remapping could hypothetically create arithmetic operations that target the exact same underlying registers, resulting in data corruption due to pipeline overlaps. Out-of-order / Superscalar micro-architectures with register-renaming will have an easier time dealing with this than DSP-style SIMD micro-architectures.
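The remap algorithm above can be wrapped as a function that returns the full remapped index sequence, making the in-place transpose effect directly visible (same xdim/ydim/zdim assumptions as the earlier listing; `remap_indices` is an illustrative name):

```python
def remap_indices(xdim, ydim, zdim, order):
    """Return the remapped element-index sequence for one permutation."""
    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]
    out = []
    for _ in range(xdim * ydim * zdim):
        out.append(idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim)
        for i in range(3):             # odometer-style carry, in
            idxs[order[i]] += 1        # the permuted digit order
            if idxs[order[i]] != lims[order[i]]:
                break
            idxs[order[i]] = 0
    return out

# permute option 000 (identity): plain 0..N-1, no reordering
print(remap_indices(2, 3, 1, [0, 1, 2]))  # [0, 1, 2, 3, 4, 5]
# order [1,0,2]: a 2x3 matrix read out transposed, in place
print(remap_indices(2, 3, 1, [1, 0, 2]))  # [0, 2, 4, 1, 3, 5]
```

Every permutation visits each element exactly once, which is why a transpose can be expressed purely as an index remap, with no data movement.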

Instruction Execution Order

Simple-V behaves as if it is a hardware-level "macro expansion system", substituting and expanding a single instruction into multiple sequential instructions with contiguous and sequentially-incrementing registers. As such, it does not modify - or specify - the behaviour and semantics of the execution order: that may be deduced from the existing RV specification in each and every case.

So for example if a particular micro-architecture permits out-of-order execution, and it is augmented with Simple-V, then wherever instructions may be out-of-order then so may the "post-expansion" SV ones.

If on the other hand there are memory guarantees which specifically prevent and prohibit certain instructions from being re-ordered (such as the Atomicity Axiom, or FENCE constraints), then clearly those constraints MUST also be obeyed "post-expansion".

It should be absolutely clear that SV is not about providing new functionality or changing the existing behaviour of a micro-architectural design, or about changing the RISC-V Specification. It is purely about compacting what would otherwise be contiguous instructions that use sequentially-increasing register numbers down to the one instruction.


Despite being a 98% complete and accurate topological remap of RVV concepts and functionality, no new instructions are needed. All RVV instructions can be re-mapped, however xBitManip becomes a critical dependency for efficient manipulation of predication masks (as a bit-field). Despite the removal of all RVV opcodes, with the exception of CLIP and VSELECT.X all instructions from RVV Base are topologically re-mapped and retain their complete functionality, intact. Note that if RV64G ever gained a MV.X instruction as well as FCLIP, the full functionality of RVV-Base would be obtained in SV.

Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard equivalents, so are left out of Simple-V. VSELECT could be included if there existed a MV.X instruction in RV (MV.X is a hypothetical non-immediate variant of MV that would allow another register to specify which register was to be copied). Note that if any of these three instructions are added to any given RV extension, their functionality will be inherently parallelised.

With some exceptions, where it does not make sense or is simply too challenging, all RV-Base instructions are parallelised:

  • CSR instructions, whilst a case could be made for fast-polling of a CSR into multiple registers, would require guarantees of strict sequential ordering that SV does not provide. Therefore, CSRs are not really suitable and are left out.
  • LUI, C.J, C.JR, WFI, AUIPC are not suitable for parallelising so are left as scalar.
  • LR/SC could hypothetically be parallelised however their purpose is single (complex) atomic memory operations where the LR must be followed up by a matching SC. A sequence of parallel LR instructions followed by a sequence of parallel SC instructions therefore is guaranteed to not be useful. Not least: the guarantees of LR/SC would be impossible to provide if emulated in a trap.
  • EBREAK, NOP, FENCE and others do not use registers so are not inherently paralleliseable anyway.

All other operations using registers are automatically parallelised. This includes AMOMAX, AMOSWAP and so on, where particular care and attention must be paid.

Example pseudo-code for an integer ADD operation (including scalar operations). Floating-point would use the FP Register and Predication CSR tables instead.

function op_add(rd, rs1, rs2) # add not VADD!
  int i, id=0, irs1=0, irs2=0;
  predval = get_pred_val(FALSE, rd);
  rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
  rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
  rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
  for (i = 0; i < VL; i++)
    if (predval & 1<<i) # predication uses intregs
       ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
    if (int_vec[rd ].isvector)  { id += 1; }
    if (int_vec[rs1].isvector)  { irs1 += 1; }
    if (int_vec[rs2].isvector)  { irs2 += 1; }
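An executable model of the op_add pseudocode above, showing redirection, predication and scalar/vector mixing together. The lookup tables are plain dicts and all names are illustrative:

```python
def op_add(ireg, int_vec, rd, rs1, rs2, VL, predval):
    """Scalar ADD, macro-expanded under SV tagging rules."""
    def lookup(r):
        e = int_vec.get(r)
        return (e["regidx"], e["isvector"]) if e else (r, False)
    rd, dvec = lookup(rd)       # redirection through the CSR table
    rs1, s1vec = lookup(rs1)
    rs2, s2vec = lookup(rs2)
    id = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):  # predication uses intregs
            ireg[rd + id] = ireg[rs1 + irs1] + ireg[rs2 + irs2]
        if dvec:  id += 1       # scalar operands do not advance
        if s1vec: irs1 += 1
        if s2vec: irs2 += 1

ireg = list(range(64))
int_vec = {1: {"regidx": 32, "isvector": True},   # x1 -> vector at 32
           2: {"regidx": 40, "isvector": True}}   # x2 -> vector at 40
# x3 stays scalar: vector + scalar-broadcast add
op_add(ireg, int_vec, 1, 2, 3, VL=4, predval=0b1111)
print(ireg[32:36])  # [43, 44, 45, 46]
```

Note how a scalar operand's element index simply never advances, which is what gives vector-scalar broadcast for free.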

Instruction Format

There are no operations added to SV, at all. Instead SV overloads pre-existing branch operations into predicated variants, and implicitly overloads arithmetic operations, MV, FCVT, and LOAD/STORE depending on CSR configurations for bitwidth and predication. Everything becomes parallelised. This includes Compressed instructions as well as any future instructions and Custom Extensions.

Branch Instructions

Standard Branch

Branch operations use standard RV opcodes that are reinterpreted to be "predicated variants" in the instance where either of the two src registers is marked as a vector (active=1, vector=1).

Note that the predication register to use (if one is enabled) is taken from the first src register. The target (destination) predication register to use (if one is enabled) is taken from the second src register.

If either of src1 or src2 are scalars (whether by there being no CSR register entry or whether by the CSR entry specifically marking the register as "scalar") the comparison goes ahead as vector-scalar or scalar-vector.

In instances where no vectorisation is detected on either src register, the operation is treated as an absolutely standard scalar branch operation. Where vectorisation is present on either or both src registers, the branch may still go ahead if and only if all tests succeed (i.e. excluding those tests that are predicated out).

Note that just as with the standard (scalar, non-predicated) branch operations, BLE, BGT, BLEU and BGTU may be synthesised by inverting src1 and src2.

In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given for predicated compare operations of function "cmp":

for (int i=0; i<vl; ++i)
  if ([!]preg[p][i])
     preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                       s2 ? vreg[rs2][i] : sreg[rs2]);

With associated predication, vector-length adjustments and so on, and temporarily ignoring bitwidth (which makes the comparisons more complex), this becomes:

s1 = reg_is_vectorised(src1);
s2 = reg_is_vectorised(src2);

if not s1 && not s2
    if cmp(rs1, rs2) # scalar compare
        goto branch

preg = int_pred_reg[rd]
reg = int_regfile

ps = get_pred_val(I/F==INT, rs1);
rd = get_pred_val(I/F==INT, rs2); # this may not exist

if not exists(rd)
    temporary_result = 0
else
    preg[rd] = 0; # initialise to zero

for (int i = 0; i < VL; ++i)
  if (ps & (1<<i)) && (cmp(s1 ? reg[src1+i] : reg[src1],
                           s2 ? reg[src2+i] : reg[src2]))
      if not exists(rd)
          temporary_result |= 1<<i;
      else
          preg[rd] |= 1<<i;  # bitfield not vector

if not exists(rd)
    if temporary_result == ps
        goto branch
else
    if preg[rd] == ps
        goto branch


  • zeroing has been temporarily left out of the above pseudo-code, for clarity
  • Predicated SIMD comparisons would break src1 and src2 further down into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register Reordering") setting Vector-Length times (number of SIMD elements) bits in Predicate Register rd, as opposed to just Vector-Length bits.
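For illustration, the all-tests-must-succeed rule can be modelled as follows. This is a hypothetical Python sketch, not normative: zeroing, element bitwidths and the rd predicate-target case are deliberately omitted, and reg, VL and the cmp callback are stand-ins.

```python
# Toy model of the predicated branch decision (illustrative only).
VL = 4
reg = list(range(64))        # stand-in register file

def branch_taken(src1, src2, s1, s2, ps, cmp):
    """True only if every non-masked element comparison succeeds."""
    result = 0
    for i in range(VL):
        if ps & (1 << i):
            a = reg[src1 + i] if s1 else reg[src1]
            b = reg[src2 + i] if s2 else reg[src2]
            if cmp(a, b):
                result |= 1 << i
    return result == ps      # branch goes ahead iff all active tests pass

# vector at regs 10..13 compared (BGT-style) against scalar reg 5
taken = branch_taken(10, 5, True, False, 0b1111, lambda a, b: a > b)
```

With the register file initialised as above, every element of the vector is greater than the scalar, so the branch is taken; a single failing (non-masked) element test would prevent it.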

TODO: predication now taken from src2. also branch goes ahead if all compares are successful.

Note also that, where normally predication requires that there be a CSR register entry for the register being used (in order for the predication CSR register entry to be active), for branches this is not the case: src2 does not have to have its CSR register entry marked as active in order for predication on src2 to be active.

Floating-point Comparisons

There are no floating-point branch operations, only compares. Interestingly, no change is needed to the instruction format, because FP Compare already stores a 1 or a zero in its integer "rd" target register, i.e. it is not actually a Branch at all: it is a compare. Thus, no change is made to the floating-point comparison operations.

It is however noted that an entry "FNE" (the opposite of FEQ) is missing, and whilst in ordinary branch code this is fine because the standard RVF compare can always be followed up with an integer BEQ or a BNE (or a compressed comparison to zero or non-zero), in predication terms that becomes more of an impact. To deal with this, SV's predication has had "invert" added to it.

Compressed Branch Instruction

Compressed Branch instructions are, just like standard Branch instructions, reinterpreted to be vectorised and predicated, based on the source register (rs1s) CSR entries. As however there is only the one source register, and given that c.beqz a1 is equivalent to beq a1, x0, the optional target in which to store the results of the comparisons is taken from the CSR predication table entries for x0.

The specific required use of x0 is, with a little thought, quite logical, though initially counterintuitive. Clearly it is not recommended to redirect x0 with a CSR register entry; however, as a means to opaquely obtain a predication target, it is the only sensible option that does not involve additional special CSRs (or, worse, additional special opcodes).

Note also that, just as with standard branches, the 2nd source (in this case x0 rather than src2) does not have to have its CSR register table marked as "active" in order for predication to work.

Vectorised Dual-operand instructions

There is a series of 2-operand instructions involving copying (and sometimes alteration):

  • C.MV
  • FMV, FNEG and FABS
  • FCVT
  • LOAD(-FP) and STORE(-FP)

All of these operations follow the same two-operand pattern, so it is both the source and destination predication masks that are taken into account. This is different from the three-operand arithmetic instructions, where the predication mask is taken from the destination register, and applied uniformly to the elements of the source register(s), element-for-element.

The pseudo-code pattern for twin-predicated operations is as follows:

function op(rd, rs):
  rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
  rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
  ps = get_pred_val(FALSE, rs); # predication on src
  pd = get_pred_val(FALSE, rd); # ... AND on dest
  for (int i = 0, int j = 0; i < VL && j < VL;):
    if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
    if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
    reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
    if (int_csr[rs].isvec) i++;
    if (int_csr[rd].isvec) j++;

This pattern covers scalar-scalar, scalar-vector, vector-scalar and vector-vector, and predicated variants of all of those. Zeroing is not presently included (TODO). As such, when compared to RVV, the twin-predicated variants of C.MV and FMV cover all standard vector operations: VINSERT, VSPLAT, VREDUCE, VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.

Note that:

  • elwidth (SIMD) is not covered in the pseudo-code above
  • ending the loop early in scalar cases (VINSERT, VEXTRACT) is also not covered
  • zero predication is also not shown (TODO).
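The pattern lends itself to direct simulation. Below is a hypothetical Python rendering of the twin-predication loop; int_csr and the regfile are simplified stand-ins, the scalar-scalar case is terminated after one element, and zeroing is omitted as in the pseudo-code.

```python
# Toy model of the twin-predicated two-operand pattern.
VL = 4
reg = [0] * 64
int_csr = {}   # regnum -> dict(active, regidx, isvec): CSR table stand-in

def get_pred_val(is_float, r):
    return 0b1111                     # all elements active (no predication)

def twin_pred_op(rd, rs, scalar_op):
    def remap(r):
        e = int_csr.get(r)
        if e and e["active"]:
            return e["regidx"], e["isvec"]
        return r, False
    rd, rdv = remap(rd)
    rs, rsv = remap(rs)
    ps = get_pred_val(False, rs)      # predication on src
    pd = get_pred_val(False, rd)      # ... AND on dest
    i = j = 0
    while i < VL and j < VL:
        if rsv:
            while not (ps & (1 << i)):
                i += 1
        if rdv:
            while not (pd & (1 << j)):
                j += 1
        reg[rd + j] = scalar_op(reg[rs + i])
        if rsv: i += 1
        if rdv: j += 1
        if not rsv and not rdv:       # scalar-scalar: a single element
            break

# VSPLAT: scalar x3 copied into the vector redirected from x8
reg[3] = 99
int_csr[8] = dict(active=True, regidx=16, isvec=True)
twin_pred_op(8, 3, lambda x: x)
```

Because the source is scalar, i never advances and the same value lands in every destination element: exactly the VSPLAT row of the C.MV table later in this document.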

C.MV Instruction

There is no MV instruction in RV however there is a C.MV instruction. It is used for copying integer-to-integer registers (vectorised FMV is used for copying floating-point).

If either the source or the destination register are marked as vectors C.MV is reinterpreted to be a vectorised (multi-register) predicated move operation. The actual instruction's format does not change:

| 15..12 | 11..7 | 6..2 | 1..0 |
| ------ | ----- | ---- | ---- |
| funct4 | rd    | rs   | op   |
| 4      | 5     | 5    | 2    |
| C.MV   | dest  | src  | C0   |

A simplified version of the pseudocode for this operation is as follows:

function op_mv(rd, rs) # MV not VMV!
  rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
  rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
  ps = get_pred_val(FALSE, rs); # predication on src
  pd = get_pred_val(FALSE, rd); # ... AND on dest
  for (int i = 0, int j = 0; i < VL && j < VL;):
    if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
    if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
    ireg[rd+j] <= ireg[rs+i];
    if (int_csr[rs].isvec) i++;
    if (int_csr[rd].isvec) j++;

There are several different instructions from RVV that are covered by this one opcode:

| src    | dest   | predication | op             |
| ------ | ------ | ----------- | -------------- |
| scalar | vector | none        | VSPLAT         |
| scalar | vector | destination | sparse VSPLAT  |
| scalar | vector | 1-bit dest  | VINSERT        |
| vector | scalar | 1-bit? src  | VEXTRACT       |
| vector | vector | none        | VCOPY          |
| vector | vector | src         | Vector Gather  |
| vector | vector | dest        | Vector Scatter |
| vector | vector | src & dest  | Gather/Scatter |
| vector | vector | src == dest | sparse VCOPY   |

Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV operations with inversion on the src and dest predication for one of the two C.MV operations.

Note that in the instance where the Compressed Extension is not implemented, MV may be used, but that is a pseudo-operation mapping to addi rd, rs, 0. Note that the behaviour is different from C.MV, because with addi the predication mask to use is taken only from rd and is applied against all elements: rd[i] = rs[i].

FMV, FNEG and FABS Instructions

These are identical in form to C.MV, except covering floating-point register copying. The same double-predication rules also apply. However, when elwidth is not set to default, the instruction is implicitly and automatically converted to a (vectorised) floating-point type-conversion operation of the appropriate size, covering the source and destination register bitwidths.

(Note that FMV, FNEG and FABS are all actually pseudo-instructions)

FCVT Instructions

These are again identical in form to C.MV, except that they cover floating-point to integer and integer to floating-point. When element width in each vector is set to default, the instructions behave exactly as they are defined for standard RV (scalar) operations, except vectorised in exactly the same fashion as outlined in C.MV.

However when the source or destination element width is not set to default, the opcode's explicit element widths are over-ridden to new definitions, and the opcode's element width is taken as indicative of the SIMD width (if applicable i.e. if packed SIMD is requested) instead.

For example FCVT.S.L would normally be used to convert a 64-bit integer in register rs1 to a 64-bit floating-point number in rd. If however the source rs1 is set to be a vector, where elwidth is set to default/2 and "packed SIMD" is enabled, then the first 32 bits of rs1 are converted to a floating-point number to be stored in rd's first element and the higher 32-bits also converted to floating-point and stored in the second. The 32 bit size comes from the fact that FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to divide that by two it means that rs1 element width is to be taken as 32.

Similar rules apply to the destination register.

LOAD / STORE Instructions and LOAD-FP/STORE-FP

An earlier draft of SV modified the behaviour of LOAD/STORE. This actually undermined the fundamental principle of SV, namely that there be no modifications to the scalar behaviour (except where absolutely necessary), in order to simplify an implementor's task if considering converting a pre-existing scalar design to support parallelism.

So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality do not change in SV, however just as with C.MV it is important to note that dual-predication is possible. Using the template outlined in the section "Vectorised dual-op instructions", the pseudo-code covering scalar-scalar, scalar-vector, vector-scalar and vector-vector applies, where SCALAR_OPERATION is as follows, exactly as for a standard scalar RV LOAD operation:

    srcbase = ireg[rs+i];
    return mem[srcbase + imm];

Whilst LOAD and STORE remain as-is when compared to their scalar counterparts, the incrementing on the source register (for LOAD) means that pointers-to-structures can be easily implemented, and if contiguous offsets are required, those pointers (the contents of the contiguous source registers) may simply be set up to point to contiguous locations.
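As a non-normative illustration, substituting that LOAD body into the twin-predication template gives a vector gather via a vector of pointers. The toy Python below is a sketch only: mem, ireg and the isvec flags are stand-ins, and predication is left fully enabled.

```python
# Toy model: twin-predicated LOAD gathering via a vector of pointers.
VL = 3
mem = {0x1000: 7, 0x2000: 8, 0x3000: 9}      # sparse memory stand-in
ireg = [0] * 32
ireg[10:13] = [0x1000, 0x2000, 0x3000]       # pointers-to-structures

def vec_load(rd, rs, imm, rd_isvec=True, rs_isvec=True):
    i = j = 0
    while i < VL and j < VL:
        srcbase = ireg[rs + (i if rs_isvec else 0)]
        ireg[rd + (j if rd_isvec else 0)] = mem[srcbase + imm]
        if rs_isvec: i += 1
        if rd_isvec: j += 1
        if not rs_isvec and not rd_isvec:    # scalar-scalar: one element
            break

vec_load(20, 10, 0)    # x20..x22 <- mem[*x10], mem[*x11], mem[*x12]
```

Each source register supplies an independent base address, which is how pointers-to-structures fall out of the pattern for free.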

Compressed Stack LOAD / STORE Instructions

C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated, where it is implicit in C.LWSP/FLWSP that x2 is the source register. It is therefore possible to use predicated C.LWSP to efficiently pop registers off the stack (by predicating x2 as the source), cherry-picking which registers to store to (by predicating the destination). Likewise for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.

However, to do so, the behaviour of C.LWSP/C.SWSP needs to be slightly different: where x2 is marked as vectorised, instead of incrementing the register on each loop (x2, x3, x4...), instead it is the immediate that must be incremented. Pseudo-code follows:

function lwsp(rd, rs):
  rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
  rs = x2 # effectively no redirection on x2.
  ps = get_pred_val(FALSE, rs); # predication on src
  pd = get_pred_val(FALSE, rd); # ... AND on dest
  for (int i = 0, int j = 0; i < VL && j < VL;):
    if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
    if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
    reg[rd+j] = mem[x2 + ((offset+i) * 4)]
    if (int_csr[rs].isvec) i++;
    if (int_csr[rd].isvec) j++;

For C.LDSP, the offset (and loop) multiplier would be 8, and for C.LQSP it would be 16. Effectively this makes C.LWSP etc. a Vector "Unit Stride" Load instruction.
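A sketch of what the pseudo-code above does in practice (hypothetical Python; the word size is 4 as for C.LWSP, the stack memory is a stand-in, and the destination predicate is assumed to have at least VL bits set):

```python
# Toy model of vectorised C.LWSP as a unit-stride load.
VL = 4
mem = {0x100 + w * 4: 0xA0 + w for w in range(8)}  # stand-in stack words
ireg = [0] * 32
ireg[2] = 0x100                                    # x2: stack pointer

def c_lwsp(rd, offset, pd):
    j = 0
    for i in range(VL):           # the *immediate* increments, not x2
        while not (pd & (1 << j)):
            j += 1                # cherry-pick destinations via pd
        ireg[rd + j] = mem[ireg[2] + (offset + i) * 4]
        j += 1

c_lwsp(10, 0, 0b1111)   # loads 4 consecutive stack words into x10..x13
```

A sparse pd would skip destination registers while the memory accesses remain contiguous, which is the LOAD-Multiple cherry-picking described above.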

Note: it is still possible to redirect x2 to an alternative target register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as general-purpose Vector "Unit Stride" LOAD/STORE operations.

Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE, where the same rules apply and the same pseudo-code apply as for non-compressed LOAD/STORE. This is different from Compressed Stack LOAD/STORE (C.LWSP / C.SWSP), which have been augmented to become Vector "Unit Stride" capable.

Just as with uncompressed LOAD/STORE C.LD / C.ST increment the register during the hardware loop, not the offset.

Element bitwidth polymorphism

Element bitwidth is best covered as its own special section, as it is quite involved and applies uniformly across-the-board. SV restricts bitwidth polymorphism to default, default/2, default*2 and 8-bit (whilst this seems limiting, the justification is covered in a later sub-section).

The effect of setting an element bitwidth is to re-cast each entry in the register table, and all memory operations involving load/stores of certain specific sizes, to a completely different width. Thus, in C-style terms, on an RV64 architecture, each register effectively now looks like this:

typedef union {
    uint8_t  b[8];
    uint16_t s[4];
    uint32_t i[2];
    uint64_t l[1];
} reg_t;

// integer table: assume maximum SV 7-bit regfile size
reg_t int_regfile[128];

where the CSR Register table entry (not the instruction alone) determines which of those union entries is to be used on each operation, and the VL element offset in the hardware-loop specifies the index into each array.

However a naive interpretation of the data structure above masks the fact that setting VL greater than 8, for example, when the bitwidth is 8, accessing one specific register "spills over" to the following parts of the register file in a sequential fashion. So a much more accurate way to reflect this would be:

typedef union {
    uint8_t  actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
    uint8_t  b[0]; // array of type uint8_t
    uint16_t s[0];
    uint32_t i[0];
    uint64_t l[0];
    uint128_t d[0];
} reg_t;

reg_t int_regfile[128];

where, when accessing any individual regfile[n].b entry, it is permitted (in C, via the zero-length-array idiom supported as a compiler extension) to over-run the declared length of the array, and thus "overspill" into consecutive register file entries, in a fashion that is completely transparent to a greatly-simplified software / pseudo-code representation. It is however critical to note that it is the responsibility of the implementor to ensure that, towards the end of the register file, an exception is thrown if any attempt is made to access beyond the "real" register bytes.
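The same overspill behaviour can be illustrated without relying on C zero-length arrays, by treating the regfile as one flat byte array in which each element width is simply a different stride. This is a hypothetical little-endian RV64 model; set_elem/get_elem are illustrative names.

```python
# The regfile as a flat byte array: element width is just a stride.
import struct

XLEN_BYTES = 8                          # RV64
regfile = bytearray(32 * XLEN_BYTES)    # 32-entry integer regfile

FMT = {8: "B", 16: "H", 32: "I", 64: "Q"}

def set_elem(reg, elwidth, offset, val):
    addr = reg * XLEN_BYTES + offset * (elwidth // 8)
    struct.pack_into("<" + FMT[elwidth], regfile, addr, val)

def get_elem(reg, elwidth, offset):
    addr = reg * XLEN_BYTES + offset * (elwidth // 8)
    return struct.unpack_from("<" + FMT[elwidth], regfile, addr)[0]

# VL=10 at elwidth=8 starting at x5: elements 8 and 9 "overspill" into x6
for i in range(10):
    set_elem(5, 8, i, 0x10 + i)
```

Reading x6 afterwards shows the spilled elements; a hardware implementation must trap rather than wrap once the end of the real register file is reached.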

Now we may modify the pseudo-code for an operation where all element bitwidths have been set to the same size, where this pseudo-code is otherwise identical to its "non"-polymorphic versions (above):

function op_add(rd, rs1, rs2) # add not VADD!
  for (i = 0; i < VL; i++)
       // TODO, calculate if over-run occurs, for each elwidth
       if (elwidth == 8) {
           int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                    int_regfile[rs2].b[irs2];
       } else if elwidth == 16 {
           int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                    int_regfile[rs2].s[irs2];
       } else if elwidth == 32 {
           int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                    int_regfile[rs2].i[irs2];
       } else { // elwidth == 64
           int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                    int_regfile[rs2].l[irs2];
       }

So here we can see clearly: for 8-bit entries rd, rs1 and rs2 (and registers following sequentially on respectively from the same) are "type-cast" to 8-bit; for 16-bit entries likewise and so on.

However that only covers the case where the element widths are the same. Where the element widths are different, the following algorithm applies:

  • Analyse the bitwidth of all source operands and work out the maximum. Record this as "maxsrcbitwidth"
  • If any given source operand requires sign-extension or zero-extension (lb, div, rem, mul, sll, srl, sra etc.), then instead of the mandatory 32-bit sign-extension / zero-extension (or whatever is specified in the standard RV specification), sign/zero-extend from the respective individual source operand's bitwidth (from the CSR table) out to "maxsrcbitwidth" (previously calculated).
  • Following separate and distinct (optional) sign/zero-extension of all source operands as specifically required for that operation, carry out the operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV this may be a "null" (copy) operation, and that with FCVT, the changes to the source and destination bitwidths may also turn FCVT effectively into a copy).
  • If the destination operand requires sign-extension or zero-extension, instead of a mandatory fixed size (typically 32-bit for arithmetic, for subw for example, and otherwise various: 8-bit for sb, 16-bit for sw etc.), overload the RV specification with the bitwidth from the destination register's elwidth entry.
  • Finally, store the (optionally) sign/zero-extended value into its destination: memory for sb/sw etc., or an offset section of the register file for an arithmetic operation.

In this way, polymorphic bitwidths are achieved without requiring a massive 64-way permutation of calculations per opcode, for example (4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible rd bitwidths). The pseudo-code is therefore as follows:

typedef union {
    uint8_t  b;
    uint16_t s;
    uint32_t i;
    uint64_t l;
} el_reg_t;

bw(elwidth):
    if elwidth == 0:
        return xlen
    if elwidth == 1:
        return xlen / 2
    if elwidth == 2:
        return xlen * 2
    // elwidth == 3:
    return 8

get_max_elwidth(rs1, rs2):
    return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
               bw(int_csr[rs2].elwidth)) # again XLEN if no entry

get_polymorphed_reg(reg, bitwidth, offset):
    el_reg_t res;
    res.l = 0; // TODO: going to need sign-extending / zero-extending
    if bitwidth == 8:
        res.b = int_regfile[reg].b[offset]
    elif bitwidth == 16:
        res.s = int_regfile[reg].s[offset]
    elif bitwidth == 32:
        res.i = int_regfile[reg].i[offset]
    elif bitwidth == 64:
        res.l = int_regfile[reg].l[offset]
    return res

set_polymorphed_reg(reg, bitwidth, offset, val):
    if bitwidth == 8:
        int_regfile[reg].b[offset] = val
    elif bitwidth == 16:
        int_regfile[reg].s[offset] = val
    elif bitwidth == 32:
        int_regfile[reg].i[offset] = val
    elif bitwidth == 64:
        int_regfile[reg].l[offset] = val

  maxsrcwid = get_max_elwidth(rs1, rs2)  # source element width(s)
  destwid = bw(int_csr[rd].elwidth)      # destination element width
  for (i = 0; i < VL; i++)
    if (predval & 1<<i) # predication uses intregs
       // TODO, calculate if over-run occurs, for each elwidth
       src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
       // TODO, sign/zero-extend src1 and src2 as operation requires
       if (op_requires_sign_extend_src1)
          src1 = sign_extend(src1, maxsrcwid)
       src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
       result = src1 + src2 # actual add here
       // TODO, sign/zero-extend result, as operation requires
       if (op_requires_sign_extend_dest)
          result = sign_extend(result, maxsrcwid)
       set_polymorphed_reg(rd, destwid, ird, result)
    if (int_vec[rd ].isvector)  { id += 1; }
    if (int_vec[rs1].isvector)  { irs1 += 1; }
    if (int_vec[rs2].isvector)  { irs2 += 1; }

Whilst specific sign-extension and zero-extension pseudocode calls are left out, due to each operation being different, the above should make clear that:

  • the source operands are extended out to the maximum bitwidth of all source operands
  • the operation takes place at that maximum source bitwidth
  • the result is extended (or potentially even, truncated) before being stored in the destination. i.e. truncation (if required) to the destination width occurs after the operation not before.
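Those three rules can be condensed into a few lines of illustrative, non-normative Python, here for an unsigned (zero-extending) add with per-operand widths; poly_add and its parameters are invented for this sketch.

```python
# Toy model of polymorphic-width add: extend sources to the max source
# width, operate at that width, then truncate/extend to the dest width.
def poly_add(src1, w1, src2, w2, wd):
    maxw = max(w1, w2)                      # maximum source bitwidth
    a = src1 & ((1 << w1) - 1)              # each source at its own width
    b = src2 & ((1 << w2) - 1)              # (zero-extension is implicit)
    result = (a + b) & ((1 << maxw) - 1)    # operation at maxw bits
    return result & ((1 << wd) - 1)         # then fit to the destination

# 8-bit + 16-bit sources: the add happens at 16 bits, so the carry out
# of bit 7 survives; an 8-bit destination then truncates it away again.
wide = poly_add(0xFF, 8, 0x0101, 16, 16)
narrow = poly_add(0xFF, 8, 0x0101, 16, 8)
```

Note the third case in the tests: with two 8-bit sources, the carry is lost at the 8-bit operation width even when the destination is wider, because truncation to the destination happens after, not before, the operation.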

Polymorphic floating-point operation exceptions and error-handling

For floating-point operations, conversion takes place without raising any kind of exception. Exactly as specified in the standard RV specification, NaN (or the appropriate value) is stored if the result is beyond the range of the destination, and, just as with scalar operations, the relevant floating-point flag is raised in FCSR. Again just as with scalar operations, it is software's responsibility to check this flag. Given that the FCSR flags are "accrued", the fact that multiple element operations could have occurred is not a problem.

Note that it is perfectly legitimate for floating-point bitwidths of only 8 to be specified. However whilst it is possible to apply IEEE 754 principles, no actual standard yet exists. Implementors wishing to provide hardware-level 8-bit support rather than throw a trap to emulate in software should contact the author of this specification before proceeding.

Polymorphic shift operators

A special note is needed for changing the element width of left and right shift operators, particularly right-shift. Even for standard RV base, in order for correct results to be returned, the second operand RS2 must be truncated to be within the range of RS1's bitwidth. spike's implementation of sll for example is as follows:

WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));

which means: where XLEN is 32 (for RV32), restrict RS2 to cover the range 0..31 so that RS1 will only be left-shifted by the amount that is possible to fit into a 32-bit register. Whilst this appears not to matter for hardware, it matters greatly in software implementations, and it also matters where an RV64 system is set to "RV32" mode, such that the underlying registers RS1 and RS2 comprise 64 hardware bits each.

For SV, where each operand's element bitwidth may be over-ridden, the rule about determining the operation's bitwidth still applies, being defined as the maximum bitwidth of RS1 and RS2. However, this rule also applies to the truncation of RS2. In other words, after determining the maximum bitwidth, RS2's range must also be truncated to ensure a correct answer. Example:

  • RS1 is over-ridden to a 16-bit width
  • RS2 is over-ridden to an 8-bit width
  • RD is over-ridden to a 64-bit width
  • the maximum bitwidth is thus determined to be 16-bit: max(8, 16)
  • RS2 is truncated to a range of values from 0 to 15: RS2 & (16-1)

Pseudocode for this example would therefore be:

WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));

This example illustrates that considerable care therefore needs to be taken to ensure that left and right shift operations are implemented correctly.
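A small, hypothetical Python check of the 16/8/64 example above makes the truncation rule concrete (sign-extension of the result into rd is omitted for brevity):

```python
# Toy model of the shift-amount truncation rule: RS2 is masked to the
# operation width (max of the source element widths), here 16 bits.
def poly_sll(rs1, rs2, w1=16, w2=8, wd=64):
    opwidth = max(w1, w2)                     # max(16, 8) = 16
    shamt = rs2 & (opwidth - 1)               # RS2 & (16-1)
    result = ((rs1 & ((1 << w1) - 1)) << shamt) & ((1 << opwidth) - 1)
    return result & ((1 << wd) - 1)           # stored into 64-bit rd

# a shift amount of 20 is truncated to 20 & 15 == 4:
r = poly_sll(1, 20)
```

Without the mask, a shift of 20 would (incorrectly) produce a value wider than the 16-bit operation width; with it, bits shifted past bit 15 are simply lost, matching the spike-style scalar semantics.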

Why SV bitwidth specification is restricted to 4 entries

The four entries for SV element bitwidths allow only three over-rides:

  • default bitwidth for a given operation divided by two
  • default bitwidth for a given operation multiplied by two
  • 8-bit

At first glance this seems completely inadequate: for example, RV64 appears unable to reach 16-bit operations, because 64 divided by 2 is 32. However, the reader may have forgotten that it is possible, at run-time, to switch a 64-bit application into 32-bit mode, by setting UXL. Once switched, opcodes that formerly had 64-bit meanings now have 32-bit meanings, and in this way, "default/2" now reaches 16-bit where previously it meant "32-bit".

There is however an absolutely crucial aspect of SV here that explicitly needs spelling out: whether the "vectorised" bit is set in the register's CSR entry.

If "vectorised" is clear (not set), this indicates that the operation is "scalar". Under these circumstances, when an elwidth override is set on a destination (RD), sign-extension and zero-extension, whilst changed to match the override bitwidth, will overwrite the full register entry (64-bit if RV64).

When vectorised is set, this indicates that the operation now treats elements as if they were independent registers, so regardless of the length, any parts of a given actual register that are not involved in the operation are NOT modified, but are PRESERVED.

SIMD micro-architectures may implement this by using predication on any elements in a given actual register that are beyond the end of the multi-element operation.

Example:
  • rs1, rs2 and rd are all set to 8-bit
  • VL is set to 3
  • RV64 architecture is set (UXL=64)
  • add operation is carried out
  • bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16] concatenated with similar add operations on bits 15..8 and 7..0
  • bits 24 through 63 remain as they originally were.

Example SIMD micro-architectural implementation:

  • SIMD architecture works out the nearest round number of elements that would fit into a full RV64 register (in this case: 8)
  • SIMD architecture creates a hidden predicate, binary 0b00000111 i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
  • SIMD architecture goes ahead with the add operation as if it was a full 8-wide batch of 8 adds
  • SIMD architecture passes top 5 elements through the adders (which are "disabled" due to zero-bit predication)
  • SIMD architecture gets the top 5 8-bit elements back unmodified and stores them back into rd.

This requires a read on rd, however this is required anyway in order to support non-zeroing mode.
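The sequence above can be modelled in a handful of lines (toy Python; lanes are bytes, and the hidden predicate is derived from VL exactly as described):

```python
# Toy model of a SIMD micro-architecture handling VL=3 at 8-bit width.
VL = 3
LANES = 8                                 # 8 x 8-bit lanes per RV64 reg
hidden_pred = (1 << VL) - 1               # 0b00000111: bottom 3 lanes

def simd_add8(rd_lanes, rs1_lanes, rs2_lanes):
    out = list(rd_lanes)                  # read rd: non-zeroing merge
    for lane in range(LANES):
        if hidden_pred & (1 << lane):     # only active lanes compute
            out[lane] = (rs1_lanes[lane] + rs2_lanes[lane]) & 0xFF
    return out                            # inactive lanes preserved

rd  = [0xAA] * LANES
rs1 = [1, 2, 3, 4, 5, 6, 7, 8]
rs2 = [10, 20, 30, 40, 50, 60, 70, 80]
result = simd_add8(rd, rs1, rs2)
```

The read-merge-write on rd is exactly the read that the text notes is needed anyway for non-zeroing mode.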

Specific instruction walk-throughs

This section covers walk-throughs of the above-outlined procedure for converting standard RISC-V scalar arithmetic operations to polymorphic widths, to ensure that it is correct.


ADD

Standard Scalar RV32/RV64 (xlen):

  • RS1 @ xlen bits
  • RS2 @ xlen bits
  • add @ xlen bits
  • RD @ xlen bits

Polymorphic variant:

  • RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
  • RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
  • add @ max(rs1, rs2) bits
  • RD @ rd bits. zero-extend to rd if rd > max(rs1, rs2) otherwise truncate

Note here that polymorphic add zero-extends its source operands, where addw sign-extends.


ADDW

Standard Scalar RV64 (xlen):

  • RS1 @ xlen bits
  • RS2 @ xlen bits
  • add @ xlen bits
  • RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.

Polymorphic variant:

  • RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
  • RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
  • add @ max(rs1, rs2) bits
  • RD @ rd bits. sign-extend to rd if rd > max(rs1, rs2) otherwise truncate

Note here that polymorphic addw sign-extends its source operands, where add zero-extends.

This requires a little more in-depth analysis. Where the bitwidth of rs1 equals the bitwidth of rs2, no sign-extending will occur. It is only where the bitwidth of either rs1 or rs2 are different, will the lesser-width operand be sign-extended.

Effectively however, both rs1 and rs2 are being sign-extended to the bitwidth of rd (or truncated), where for add they are both zero-extended.



ADDIW

Standard Scalar RV64I:

  • RS1 @ xlen bits, truncated to 32-bit
  • immed @ 12 bits, sign-extended to 32-bit
  • add @ 32 bits
  • RD @ rd bits. sign-extend to rd if rd > 32, otherwise truncate.

Polymorphic variant:

  • RS1 @ rs1 bits
  • immed @ 12 bits, sign-extend to max(rs1, 12) bits
  • add @ max(rs1, 12) bits
  • RD @ rd bits. sign-extend to rd if rd > max(rs1, 12) otherwise truncate
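A worked, non-normative check of the immediate variant in Python, showing the 12-bit sign-extension and the destination rule (poly_addi and sext are invented for this sketch):

```python
# Toy model of polymorphic ADDI: the 12-bit immediate is sign-extended
# to max(rs1-width, 12), the add runs at that width, and the result is
# sign-extended (or truncated) to the destination width.
def sext(val, width):
    val &= (1 << width) - 1
    return val - (1 << width) if val & (1 << (width - 1)) else val

def poly_addi(rs1, w1, imm12, wd):
    opw = max(w1, 12)
    a = rs1 & ((1 << w1) - 1)
    b = sext(imm12, 12)
    result = (a + b) & ((1 << opw) - 1)         # add at max(rs1, 12) bits
    return sext(result, opw) & ((1 << wd) - 1)  # sign-extend out to rd

# rs1 8-bit, imm = -1: 5 + (-1) = 4 at 12 bits, into a 16-bit rd
r1 = poly_addi(5, 8, 0xFFF, 16)
# imm = -2048: the negative 12-bit result sign-extends into 16 bits
r2 = poly_addi(0, 8, 0x800, 16)
```

The second case shows the "sign-extend to rd if rd > max(rs1, 12)" rule: the 12-bit value 0x800 becomes 0xF800 in a 16-bit destination.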


Exceptions

TODO: expand. Exceptions may occur at any time, in any given underlying scalar operation. This implies that context-switching (traps) may occur, and operation must be returned to where it left off. That in turn implies that the full state - including the current parallel element being processed - has to be saved and restored. This is what the STATE CSR is for.

The implications are that all underlying individual scalar operations "issued" by the parallelisation have to appear to be executed sequentially. The further implications are that if two or more individual element operations are underway, and one with an earlier index causes an exception, it may be necessary for the microarchitecture to discard or terminate operations with higher indices.

This being somewhat dissatisfactory, an "opaque predication" variant of the STATE CSR is being considered.


Hints

A "HINT" is an operation that has no effect on architectural state, where its use may, by agreed convention, give advance notification to the microarchitecture: branch prediction notification would be a good example. Usually HINTs are where rd=x0.

With Simple-V being capable of issuing parallel instructions where rd=x0, the space for possible HINTs is expanded considerably. VL could be used to indicate different hints. In addition, if predication is set, the predication register itself could hypothetically be passed in as a parameter to the HINT operation.

No specific hints are yet defined in Simple-V.

Subsets of RV functionality

This section describes the differences when SV is implemented on top of different subsets of RV.

Common options

It is permitted to limit the size of either (or both) the register files down to the original size of the standard RV architecture. However, reducing them below the mandatory limits set in the RV standard will result in non-compliance with the SV Specification.

RV32 / RV32F

When RV32 or RV32F is implemented, XLEN is set to 32, and thus the maximum limit for predication is also restricted to 32 bits. Whilst not actually specifically an "option" it is worth noting.


RV32G

Normally in standard RV32 it does not make much sense to have RV32G, however it is automatically implied to exist in RV32+SV due to the option for the element width to be doubled. This may be sufficient for implementors, such that actually needing RV32G itself (which makes no sense given that the RV32 integer register file is 32-bit) may be redundant.

It is a strange combination that may make sense on closer inspection, particularly given that under the standard RV32 system many of the opcodes to convert and sign-extend 64-bit integers to 64-bit floating-point will be missing, as they are assumed to only be present in an RV64 context.

RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)

When floating-point is not implemented, the size of the User Register and Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries per table).


RV32E

In embedded scenarios the User Register and Predication CSRs may be dropped entirely, or optionally limited to 1 CSR, such that the combined number of entries from the M-Mode CSR Register table plus U-Mode CSR Register table is either 4 16-bit entries or (if the U-Mode is zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for the Predication CSR tables.

RV32E is the most likely candidate for simply detecting that registers are marked as "vectorised", and generating an appropriate exception for the VL loop to be implemented in software.


RV128

RV128 has not been especially considered here; however it has some extremely large possibilities: doubling the element width implies 256-bit operands, spanning 2 128-bit registers each, and predication of total length 128 bits, given that XLEN is now 128.

Under consideration

For element-grouping, if there is unused space within a register (3 16-bit elements in a 64-bit register, for example), the recommendation is:

  • For the unused elements in an integer register, the used element closest to the MSB is sign-extended on write and the unused elements are ignored on read.
  • The unused elements in a floating-point register are treated as-if they are set to all ones on write and are ignored on read, matching the existing standard for storing smaller FP values in larger registers.

Info register:

One solution is to just not support LR/SC wider than a fixed implementation-dependent size, which must be at least 1 XLEN word. That size could be read from a read-only CSR, which could also carry information such as the kind and width of hardware parallelism supported (128-bit SIMD, minimal virtual parallelism, etc.) and other things (such as the number of registers supported).

That CSR would have to have a flag to make a read trap, so that a hypervisor can simulate different values.

And what about instructions like JALR? 

Answer: they are not vectorised, so they are not a problem.