The plugin system and the in-order pipeline
The in-order pipeline is a classic 5-stage RISC pipeline. This section, which explains the implementation, assumes basic knowledge of how such a pipeline works.
The pipeline is built as a linear sequence of stages and, in principle, data is propagated each cycle from one stage to the next. At the end of each stage, a set of pipeline registers stores the data from the preceding stage to be used by the next stage. The propagation of the values in the pipeline registers is managed by the scheduling logic which, among other things, makes sure that hazards are resolved. The logic in the stages is not fixed but instead is added dynamically when building the pipeline through the use of plugins. Plugins typically create logic that reads the data in the preceding pipeline registers, transforms it, and then stores the results in the next pipeline registers. To ensure maximum reusability, plugins can make their logic available to other plugins by exposing services.
Each pipeline stage is represented by an instantiation of the `Stage` class. Although this class is a `Component` subclass, it does not contain any logic by itself. All logic is added to the stages by plugins. The only functionality implemented by the `Stage` class is reading input pipeline registers (written by the previous stage) and writing output registers (to be read by the next stage). To use this functionality, plugins can use the `input`, `output`, and `value` methods of the `Stage` class. All these methods return a `Data` subtype that can be used to read from or write to an input or output. Multiple input and output registers can be added to stages and, to identify which one should be accessed, objects that subclass `PipelineData` are used.
The `value` method is similar to `input`. The difference is that when the `output` has already been updated for a certain pipeline register, `value` returns the updated value while `input` always returns the original input value. In practice, `value` should almost always be used instead of `input`.
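As a hedged sketch of how these methods fit together (the plugin boilerplate is omitted and the arithmetic is purely illustrative), a plugin might read an input, write an output, and observe its own update through `value`:

```scala
stage plug new Area {
  import stage._

  // Original value of rs1 as it entered this stage.
  val operand = input(pipeline.data.RS1_DATA)

  // Write a result into the output pipeline register.
  output(pipeline.data.RD_DATA) := operand + 1

  // value sees the update made above, while
  // input(pipeline.data.RD_DATA) would still return the
  // register content produced by the previous stage.
  val result = value(pipeline.data.RD_DATA)
}
```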
`PipelineData` is a generic class that wraps around a SpinalHDL `Data` subtype. To use instances of this class as identifiers for pipeline registers, Scala `object` identity is used. For example, the predefined pipeline register that stores the program counter is defined as follows:
```scala
object PC extends PipelineData(UInt(32 bits))
```
The `object` called `PC` is used as the identifier passed to the `input`, `output`, and `value` methods and, in this case, the return value of these methods will have type `UInt`. See `PipelineData.scala` for all predefined pipeline registers. The most important ones are the following:
- `IR` (`UInt(32 bits)`): instruction register;
- `RS1`/`RS2` (`UInt(5 bits)`): identifiers of `rs1` and `rs2`;
- `RS1_DATA`/`RS2_DATA` (`UInt(32 bits)`): values of `rs1` and `rs2`. Note that due to various data hazards, these values are not always valid (see later);
- `RD` (`UInt(5 bits)`): identifier of `rd`;
- `RD_DATA` (`UInt(32 bits)`): value to be written to `rd`;
- `WRITE_RD` (`Bool`): `True` if this instruction writes to `rd`;
- `RD_VALID` (`Bool`): `True` if `RD_DATA` is valid.
Of course, plugins are not restricted to the predefined pipeline registers and can define new ones as needed.
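For instance, a plugin might define its own registers like this (the names are hypothetical; the pattern follows the `PC` definition above):

```scala
object Data {
  // Flag decoded from the instruction (hypothetical).
  object IS_MULDIV extends PipelineData(Bool())

  // 32-bit payload carried alongside it (hypothetical).
  object MULDIV_RESULT extends PipelineData(UInt(32 bits))
}
```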
For a pipeline register called `FOO`, the following Verilog signals are created for each `Stage` that uses it: `in_FOO` (`input`), `out_FOO` (`output`), and `value_FOO` (`value`). When inspecting the results of a simulation in GTKWave, most information can be gathered from these signals.
When building the pipeline, logic is added that propagates pipeline registers as needed. When an `input` is requested at a particular stage, it will be propagated from the earliest stage where an `output` with the same identifier is produced, through all intermediate stages. If an `input` is requested at a stage before the earliest `output` stage, an error is produced. This logic also ensures that pipeline registers are not propagated further than the latest stage where an `input` is requested, in order to minimize flip-flop usage.
In general, this means plugins can simply produce and consume pipeline register values without having to worry about connecting all stages involved in the propagation.
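As a sketch, assuming two stages named `decode` and `writeback` and a hypothetical register `FOO`:

```scala
// Produce FOO in an early stage...
decode plug new Area {
  decode.output(Data.FOO) := True
}

// ...and consume it in a later one. The pipeline builder
// automatically inserts the in_FOO/out_FOO registers in all
// intermediate stages; the plugin does not connect them itself.
writeback plug new Area {
  val foo = writeback.value(Data.FOO)
}
```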
The pipeline scheduling logic is responsible for resolving all data hazards. In order to do this, it needs information about when stages need certain register values. It also provides information to the stages about, for example, if they are currently allowed to execute.
The communication between the scheduling logic and the stages is facilitated by the `Arbitration` class defined in `Stage.scala`. The most important signals are the following (I/O direction from the perspective of `Stage`; all signals are of type `Bool`):
- `isValid` (input): is the instruction currently in this stage valid (i.e., does it eventually need to be executed);
- `isStalled` (input): is this stage stalled due to external factors (e.g., waiting for a register value);
- `isReady` (output): is this stage done executing the current instruction (used to implement multi-cycle logic in stages);
- `rs1Needed`/`rs2Needed` (output): does this stage need the value of the `RS1_DATA`/`RS2_DATA` pipeline registers? Used to signal to the scheduling logic that register values are needed in this cycle and that these values should be forwarded or the pipeline stalled.
Every stage has its own instantiation of `Arbitration` called `arbitration` which can be used by plugins. In the generated Verilog code, the signals are therefore called, for example, `arbitration_isValid`.
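As an illustration of how a plugin might interact with these signals, here is a sketch of a multi-cycle stage (the counter management is elided and purely hypothetical; the scheduling logic provides the default signal values):

```scala
stage plug new Area {
  import stage._

  // Hypothetical multi-cycle operation tracked by a counter.
  val cyclesLeft = Reg(UInt(4 bits)) init(0)

  when (arbitration.isValid) {
    // Tell the scheduler we consume rs1 this cycle so it can
    // forward the value or stall the pipeline.
    arbitration.rs1Needed := True

    // Only release the instruction once the operation is done.
    arbitration.isReady := cyclesLeft === 0
  }
}
```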
`Pipeline` is a trait that is implemented by the classes that build the pipeline structure. The current implementors are `StaticPipeline` (in-order static pipeline) and `DynamicPipeline` (out-of-order pipeline). Most plugins, including those used in this project, do not need to be aware of the underlying pipeline structure and only use the generic `Pipeline` trait.
The full definition of this trait can be found in `Pipeline.scala`; its most important methods are listed here:
- `config`: returns the `Config` object of the `Pipeline`. This object contains global configuration, most importantly `xlen`, which is the bit-width of the processor being built (see `Config.scala`);
- `data`: returns the `StandardPipelineData` object containing the predefined `PipelineData` objects (see `PipelineData.scala`);
- `service[T]`: returns the service of type `T` (see also `hasService[T]: Boolean` and `serviceOption[T]: Option[T]` for services that can be optionally included in the pipeline).
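A sketch of how a plugin might use these methods inside `build` (the optional `FormalService` name is hypothetical):

```scala
override def build(): Unit = {
  // Global configuration: the register width of the processor.
  val xlen = pipeline.config.xlen

  // A required service: this fails if no plugin implements it.
  val decoder = pipeline.service[DecoderService]

  // An optional service: only used when the corresponding
  // plugin was added to the pipeline.
  pipeline.serviceOption[FormalService].foreach { formal =>
    // use formal
  }
}
```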
As mentioned before, all logic in stages as well as in the pipeline is created by plugins. A plugin is implemented by a class inheriting the `Plugin` class. Plugins can override three methods to implement their logic:
- `setup`: used to perform any kind of configuration. Most often used to configure services offered by other plugins;
- `build`: used to build any logic (called after `setup` has been called on all plugins);
- `finish`: used for any actions that should be performed after all logic was built (called after `build` has been called on all plugins).
Plugins have access to the `Pipeline` and `Config` objects through the `pipeline` and `config` methods, respectively.
Plugins can use the `plug` method to add logic to stages. Most often, logic is added as an `Area`. When doing this, the `Area` will automatically be named after the plugin class so that all added signals are prefixed with this name.
A typical plugin looks something like this:
```scala
// Plugin[Pipeline] means we don't need to know the
// pipeline structure
class SomePlugin(stage: Stage) extends Plugin[Pipeline] {
  override def setup(): Unit = {
    // Configure services
  }

  override def build(): Unit = {
    stage plug new Area {
      // Allows using methods of stage directly
      import stage._

      // Create logic
      // For example, this will be named
      // SomePlugin_someBool in Verilog
      val someBool = Bool()
    }
  }
}
```
In this example, the plugin only adds logic to a single stage, which is provided as a constructor argument. By providing more arguments, plugins can be created that add logic to multiple stages. For example, the `RegisterFile` plugin adds register read logic to one stage and write logic to another.
Services are traits that can be implemented by plugins and used by other plugins. Currently, all service traits are defined in `Services.scala`. The `service` method defined in the `Pipeline` trait can be used to get access to the plugin that implements a certain trait. The rest of this section will describe some of the most important services.
The `DecoderService` trait is implemented by the `Decoder` plugin in `Decoder.scala`. This trait allows plugins to add decoding logic to the decode stage, which can be used to implement new instructions. Decodings are specified using the following parameters:
- `opcode` (`MaskedLiteral`): bitmask that is matched against the full 32-bit instruction register. When it matches, the following actions are applied;
- `itype` (`InstructionType`, see `RiscV.scala`): specifies which instruction type should be used for decoding. This will ensure that the `RS1`, `RS2`, `RD`, and `IMM` pipeline registers are automatically decoded;
- `action` (`Map[PipelineData, Data]`): mapping from pipeline registers to the value that should be stored in them whenever an instruction matches `opcode`.
To specify a decoding, the `configure` method should be called on a `DecoderService`. This gives access to a `DecoderConfig` object on which the `addDecoding` method can be called. It also offers an `addDefault` method to specify default values for pipeline registers.
As an example, we show the relevant parts of the `ecall` instruction implementation (see `MachineMode.scala` for the full details). The basic idea is to define a new pipeline register called `ECALL` which is set to `True` whenever the `ecall` opcode is detected. Then, inside `build`, we add logic that is triggered when the `ECALL` pipeline register is asserted:
```scala
class MachineMode(stage: Stage) extends Plugin[Pipeline] {
  object Data {
    object ECALL extends PipelineData(Bool())
  }

  object Opcodes {
    val ECALL = M"00000000000000000000000001110011"
  }

  override def setup(): Unit = {
    pipeline.service[DecoderService].configure { config =>
      config.addDefault(Map(Data.ECALL -> False))
      config.addDecoding(Opcodes.ECALL, InstructionType.I,
                         Map(Data.ECALL -> True))
    }
  }

  override def build(): Unit = {
    stage plug new Area {
      import stage._

      when (arbitration.isValid) {
        when (value(Data.ECALL)) {
          // Instruction logic
        }
      }
    }
  }
}
```
Certain instructions need to perform arithmetic operations as part of their functionality. To avoid multiple hardware instantiations of (often expensive) arithmetic circuits, the `IntAluService` trait offers a way for plugins to request that the `ALU` plugin perform operations on their behalf. To this end, it provides the `addOperation` method, which allows a plugin to specify an opcode that the `ALU` should recognize, which operation should be performed, and what the left- and right-hand sides should be. The result of the arithmetic operation is stored in the pipeline register identified by the `PipelineData` object returned by the `resultData` method.
As an example, the `jal` instruction performs a program-counter-relative jump specified by an immediate offset. Hence, the target address is the sum of the program counter and the immediate encoded in the instruction. This is the relevant code in the `BranchUnit` plugin (see `BranchUnit.scala`):
```scala
override def setup(): Unit = {
  val alu = pipeline.service[IntAluService]

  alu.addOperation(Opcodes.JAL, alu.AluOp.ADD,
                   alu.Src1Select.PC,
                   alu.Src2Select.IMM)

  ...
}

override def build(): Unit = {
  stage plug new Area {
    import stage._

    val alu = pipeline.service[IntAluService]
    val target = value(alu.resultData)

    ...
  }
}
```
The `JumpService` trait offers the functionality to make the pipeline perform a jump. Its `jump` method takes the `Stage` that performs the jump and the target address as arguments.
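Based on this description, a branch-style plugin might use the service as follows (the `IS_MY_JUMP` register is hypothetical, and the target is assumed to be computed by the ALU as in the `jal` example above):

```scala
override def build(): Unit = {
  stage plug new Area {
    import stage._

    val jumpService = pipeline.service[JumpService]

    // Target address computed elsewhere, e.g. by the ALU
    // through IntAluService.
    val target = value(pipeline.service[IntAluService].resultData)

    when (arbitration.isValid && value(Data.IS_MY_JUMP)) {
      jumpService.jump(stage, target)
    }
  }
}
```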