Computer Architecture: Out-of-Order Execution. Prof. Onur Mutlu (edited by Seth) Carnegie Mellon University

Reading for Today: Smith and Sohi, "The Microarchitecture of Superscalar Processors," Proceedings of the IEEE, 1995.
More advanced pipelining: interrupt and exception handling; out-of-order and superscalar execution concepts.

An In-order Pipeline
[pipeline diagram: F and D stages feed functional units of differing depths -- integer add (one E stage), integer mul (four E stages), FP mul (eight E stages), plus variable-latency cache misses -- followed by R and W stages]
Problem: A true data dependency stalls dispatch of younger instructions into functional (execution) units.
Dispatch: the act of sending an instruction to a functional unit.

Can We Do Better? What do the following two pieces of code have in common (with respect to execution in the previous design)?
IMUL R3 R1, R2 / ADD R3 R3, R1 / ADD R1 R6, R7 / IMUL R5 R6, R8 / ADD R7 R3, R5
LD R3 R1 (0) / ADD R3 R3, R1 / ADD R1 R6, R7 / IMUL R5 R6, R8 / ADD R7 R3, R5
Answer: In both, the first ADD stalls the whole pipeline! The ADD cannot dispatch because its source registers are unavailable, and later independent instructions cannot get executed.
How are the above code portions different? Answer: Load latency is variable (unknown until runtime). What does this affect? Think compiler vs. microarchitecture.

Preventing Dispatch Stalls
Multiple ways of doing it. You have already seen THREE:
1. Fine-grained multithreading
2. Value prediction
3. Compile-time instruction scheduling/reordering

Preventing Dispatch Stalls
Multiple ways of doing it -- you have already seen THREE: 1. fine-grained multithreading; 2. value prediction; 3. compile-time instruction scheduling/reordering.
What are the disadvantages of the above three? Any other way to prevent dispatch stalls? Actually, you have briefly seen the basic idea before.
Dataflow: fetch and fire an instruction when its inputs are ready.
Problem: in-order dispatch (scheduling, or execution). Solution: out-of-order dispatch (scheduling, or execution).

Out-of-order Execution (Dynamic Scheduling)
Idea: Move the dependent instructions out of the way of independent ones. Rest areas for dependent instructions: reservation stations.
Monitor the source values of each instruction in the resting area. When all source values of an instruction are available, fire (i.e., dispatch) the instruction. Instructions are dispatched in dataflow (not control-flow) order.
Benefit -- latency tolerance: allows independent instructions to execute and complete in the presence of a long-latency operation.
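The dataflow firing rule can be sketched as a small simulation. This is a minimal illustration, not the lecture's machine: it assumes unlimited functional units, one dispatch per step, perfect renaming (so only true dependences matter), and the exercise's latencies (ADD 4 cycles, MUL 6), applied to the IMUL/ADD snippet from the "Can We Do Better?" slide.

```python
# Hedged sketch of dataflow-order dispatch: an instruction fires as soon as
# its source values are available, not in program order.
# Assumptions (not from the lecture): unlimited FUs, ADD lat 4, MUL lat 6.

def dataflow_dispatch(program, latency):
    """program: list of (op, dest, [srcs]) in program order.
    Returns (dispatch order, cycle when the last result is produced)."""
    finish = [None] * len(program)          # completion cycle per instruction
    order = []

    def producer(i, reg):
        # the true (RAW) dependence: most recent earlier writer of reg
        for j in range(i - 1, -1, -1):
            if program[j][1] == reg:
                return j
        return None

    while len(order) < len(program):
        candidates = []
        for i, (op, dest, srcs) in enumerate(program):
            if finish[i] is not None:
                continue
            deps = [producer(i, s) for s in srcs]
            if any(d is not None and finish[d] is None for d in deps):
                continue                    # a source value is still in flight
            start = max([finish[d] for d in deps if d is not None], default=0)
            candidates.append((start, i))
        start, i = min(candidates)          # dispatch the earliest-ready one
        finish[i] = start + latency[program[i][0]]
        order.append(i)
    return order, max(finish)

code = [("MUL", "R3", ["R1", "R2"]),       # IMUL R3 <- R1, R2
        ("ADD", "R3", ["R3", "R1"]),       # ADD  R3 <- R3, R1 (waits on MUL)
        ("ADD", "R1", ["R6", "R7"]),       # independent
        ("MUL", "R5", ["R6", "R8"]),       # independent
        ("ADD", "R7", ["R3", "R5"])]       # waits on the ADD and MUL above
order, total = dataflow_dispatch(code, {"ADD": 4, "MUL": 6})
```

The independent instructions 2 and 3 dispatch past the stalled ADD, so the dispatch order comes out [0, 2, 3, 1, 4] rather than program order.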

In-order vs. Out-of-order Dispatch
Code: IMUL R3 R1, R2 / ADD R3 R3, R1 / ADD R1 R6, R7 / IMUL R5 R6, R8 / ADD R7 R3, R5
In-order dispatch + precise exceptions: [timing diagram: the first ADD stalls waiting for the IMUL, and younger independent instructions stall behind it]
Out-of-order dispatch + precise exceptions: [timing diagram: the dependent ADDs wait in reservation stations while the independent instructions dispatch and execute]
The ADD waits on the multiply producing R3. Commit happens in order. 16 vs. 12 cycles.

Enabling OoO Execution
1. Need to link the consumer of a value to the producer. Register renaming: associate a tag with each data value.
2. Need to buffer instructions until they are ready to execute. Insert each instruction into the reservation stations after renaming.
3. Instructions need to keep track of the readiness of their source values. Broadcast the tag when the value is produced; instructions compare their source tags to the broadcast tag, and if they match, the source value becomes ready.
4. When all source values of an instruction are ready, dispatch the instruction to its functional unit (FU). An instruction wakes up if all its sources are ready; if multiple instructions are awake, select one per FU.
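Step 3 (tag broadcast and wakeup) can be sketched as follows; the entry layout and the tag names are illustrative assumptions, not the lecture's exact structures.

```python
# Hedged sketch of reservation-station wakeup on a tag broadcast.
# Entry format and tag names are made up for illustration.

class RSEntry:
    def __init__(self, op, src_tags):
        self.op = op
        # each source is either already a value (tag None) or waits on a tag
        self.sources = [{"ready": t is None, "tag": t, "value": None}
                        for t in src_tags]

    def wakeup(self, tag, value):
        # compare the broadcast tag against each still-waiting source
        for s in self.sources:
            if not s["ready"] and s["tag"] == tag:
                s["ready"], s["value"] = True, value

    def is_ready(self):
        # the instruction can be selected for dispatch once all sources are ready
        return all(s["ready"] for s in self.sources)

add = RSEntry("ADD", ["x", None])  # first operand waits on tag 'x'
add.wakeup("y", 99)                # unrelated broadcast: no match, still waits
assert not add.is_ready()
add.wakeup("x", 12)                # producer puts tag 'x' and value 12 on CDB
assert add.is_ready()
```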

Tomasulo's Algorithm
OoO with register renaming, invented by Robert Tomasulo. Used in the IBM 360/91 floating-point units.
Read: Tomasulo, "An Efficient Algorithm for Exploiting Multiple Arithmetic Units," IBM Journal of R&D, Jan. 1967.
What is the major difference today? Precise exceptions: the IBM 360/91 did NOT have them.
Patt, Hwu, and Shebanow, "HPS, a new microarchitecture: rationale and introduction," MICRO 1985. Patt et al., "Critical issues regarding HPS, a high performance microarchitecture," MICRO 1985.
Variants are used in most high-performance processors: initially in the Intel Pentium Pro and AMD K5; later the Alpha 21264, MIPS R10000, IBM POWER5, IBM z196, Oracle UltraSPARC T4, ARM Cortex-A15.

Two Humps in a Modern Pipeline
[pipeline diagram: F and D stages, a scheduling stage fed by the TAG and VALUE broadcast bus, functional units of differing depths (integer add, integer mul, FP mul, load/store), then reorder and W stages; the flow is in order -> out of order -> in order]
Hump 1: reservation stations (scheduling window). Hump 2: reordering (reorder buffer, aka instruction window or active window).

General Organization of an OOO Processor
[figure: reservation stations and overall datapath, from Smith and Sohi, "The Microarchitecture of Superscalar Processors," Proc. IEEE, Dec. 1995]

Tomasulo's Machine: IBM 360/91
[datapath diagram: load buffers, store buffers, and FP registers (fed from memory and the instruction unit) feed the reservation stations over an operation bus; the reservation stations feed the FP functional units; results go to memory and back around on the common data bus]

Register Renaming
Output and anti dependencies are not true dependencies. WHY? The same register refers to values that have nothing to do with each other. They exist because there are not enough register IDs (i.e., names) in the ISA.
The register ID is renamed to the reservation station entry that will hold the register's value: register ID -> RS entry ID (architectural register ID -> physical register ID). After renaming, the RS entry ID is used to refer to the register.
This eliminates anti and output dependencies, and approximates the performance effect of a large number of registers even though the ISA has a small number.
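A minimal sketch of the renaming step, under assumed data structures (a dict-based alias table and fresh tag names t0, t1, ... standing in for RS entry IDs):

```python
# Hedged sketch: rename architectural destination registers to fresh tags.
# The tag naming scheme and program encoding are illustrative, not the
# lecture's exact machine.

def rename(program):
    """program: list of (dest, [srcs]) in program order."""
    rat = {}                      # register alias table: arch reg -> tag
    out = []
    for n, (dest, srcs) in enumerate(program):
        srcs = [rat.get(s, s) for s in srcs]   # reads use the current mapping
        rat[dest] = f"t{n}"                    # every write gets a new name
        out.append((rat[dest], srcs))
    return out

prog = [("R3", ["R1", "R2"]),   # IMUL R3 <- R1, R2
        ("R3", ["R3", "R1"]),   # ADD  R3 <- R3, R1  (WAW on R3 with above)
        ("R1", ["R6", "R7"])]   # ADD  R1 <- R6, R7  (WAR on R1 with above)
renamed = rename(prog)
# The two writes to R3 now target distinct names t0 and t1, and the read of
# R1 in the second instruction still refers to the original R1, so the anti
# (WAR) and output (WAW) dependencies are gone; only the RAW edge remains.
assert renamed == [("t0", ["R1", "R2"]),
                   ("t1", ["t0", "R1"]),
                   ("t2", ["R6", "R7"])]
```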

Tomasulo's Algorithm: Renaming
Register rename table (register alias table): [table: one row per architectural register R0-R9, each with a tag field and a valid bit, all initially valid]

Tomasulo's Algorithm
If a reservation station is available before renaming, the instruction + renamed operands (source value/tag) are inserted into the reservation station. Only rename if a reservation station is available; else stall.
While in the reservation station, each instruction watches the common data bus (CDB) for the tags of its sources. When a tag is seen, it grabs the value for that source and keeps it in the reservation station. When both operands are available, the instruction is ready to be dispatched. Dispatch the instruction to the functional unit when it is ready.
After the instruction finishes in the functional unit: arbitrate for the CDB, then put the tagged value onto the CDB (tag broadcast). The register file is connected to the CDB; each register contains a tag indicating its latest writer. If the tag in the register file matches the broadcast tag, write the broadcast value into the register (and set the valid bit). Reclaim the rename tag -- no valid copy of the tag remains in the system!
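The register-file side of the broadcast might look like this sketch; the per-register record format is an assumption for illustration.

```python
# Hedged sketch: each register holds either a valid value or the tag of its
# latest writer; a CDB broadcast fills in registers still waiting on that tag.

def cdb_broadcast(regfile, tag, value):
    for r in regfile.values():
        # write only registers whose tag matches the broadcast tag
        if not r["valid"] and r["tag"] == tag:
            r["valid"], r["value"] = True, value   # grab value, set valid bit

regfile = {
    "R3": {"valid": False, "tag": "x", "value": None},  # waiting on tag 'x'
    "R5": {"valid": True,  "tag": None, "value": 7},    # already has a value
}
cdb_broadcast(regfile, "x", 12)    # producer 'x' arbitrates and broadcasts
assert regfile["R3"] == {"valid": True, "tag": "x", "value": 12}
assert regfile["R5"]["value"] == 7                      # untouched
```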

An Exercise
MUL R3 R1, R2 / ADD R5 R3, R4 / ADD R7 R2, R6 / ADD R10 R8, R9 / MUL R11 R7, R10 / ADD R5 R5, R11
Pipeline stages: F D E W. Assume ADD (4-cycle execute) and MUL (6-cycle execute); one adder and one multiplier.
How many cycles:
- in a non-pipelined machine
- in an in-order-dispatch pipelined machine with imprecise exceptions (no forwarding and full forwarding)
- in an out-of-order-dispatch pipelined machine with imprecise exceptions (full forwarding)

Exercise Continued

Exercise Continued: With forwarding

Exercise Continued
MUL R3 R1, R2 / ADD R5 R3, R4 / ADD R7 R2, R6 / ADD R10 R8, R9 / MUL R11 R7, R10 / ADD R5 R5, R11

How It Works: Cycles 0-10
[slide sequence stepping Tomasulo's algorithm on the example code MUL R3 R1, R2 / ADD R5 R3, R4 / ADD R7 R2, R6 / ADD R10 R8, R9 / MUL R11 R7, R10 / ADD R5 R5, R11. Each slide shows the register alias table (r1-r11, each with a valid bit and a tag or value) and the reservation stations (adder entries a-d, multiplier entries x-z) as instructions are renamed, wake up on tag broadcasts, and execute. Execution highlights: cycle 3 -- x (the first MUL) in E (1); cycle 4 -- x in E (2); cycle 5 -- b in E (1), x in E (3); cycle 6 -- c in E (1), b in E (2), x in E (4); cycle 7 -- c in E (2), b in E (3), x in E (5); cycle 8 -- c in E (3), b in E (4), x in E (6); cycle 9 -- a in E (1), c in E (4); cycle 10 -- a in E (2), y in E (1)]

An Exercise, with Precise Exceptions
MUL R3 R1, R2 / ADD R5 R3, R4 / ADD R7 R2, R6 / ADD R10 R8, R9 / MUL R11 R7, R10 / ADD R5 R5, R11
Pipeline stages: F D E R W. Assume ADD (4-cycle execute) and MUL (6-cycle execute); one adder and one multiplier.
How many cycles:
- in a non-pipelined machine
- in an in-order-dispatch pipelined machine with a reorder buffer (no forwarding and full forwarding)
- in an out-of-order-dispatch pipelined machine with a reorder buffer (full forwarding)

Out-of-Order Execution with Precise Exceptions
Idea: Use a reorder buffer to reorder instructions before committing them to architectural state.
An instruction updates the register alias table (essentially a future file) when it completes execution. An instruction updates the architectural register file when it is the oldest in the machine and has completed execution.
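The in-order commit discipline can be sketched with a queue; the entry format below is an assumption for illustration, not the lecture's exact ROB.

```python
# Hedged sketch: a reorder buffer. Instructions complete out of order, but
# they update architectural state strictly in program (allocation) order.

from collections import deque

rob = deque()                 # entries in program (fetch) order
arch_regfile = {}

def allocate(dest):
    entry = {"dest": dest, "done": False, "value": None}
    rob.append(entry)
    return entry

def complete(entry, value):   # may happen in any order
    entry["done"], entry["value"] = True, value

def commit():                 # retire only from the head, in order
    while rob and rob[0]["done"]:
        e = rob.popleft()
        arch_regfile[e["dest"]] = e["value"]

a = allocate("R3"); b = allocate("R5")
complete(b, 7)                # the younger instruction finishes first...
commit()
assert arch_regfile == {}     # ...but cannot commit past the unfinished head
complete(a, 3)
commit()
assert arch_regfile == {"R3": 3, "R5": 7}
```

Because nothing younger than an unfinished instruction reaches architectural state, an exception on the head instruction can be taken precisely.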

Out-of-Order Execution with Precise Exceptions
[pipeline diagram: F and D stages, a scheduling stage fed by the TAG and VALUE broadcast bus, functional units of differing depths (integer add, integer mul, FP mul, load/store), then reorder and W stages; the flow is in order -> out of order -> in order]
Hump 1: reservation stations (scheduling window). Hump 2: reordering (reorder buffer, aka instruction window or active window).

Enabling OoO Execution, Revisited
1. Link the consumer of a value to the producer. Register renaming: associate a tag with each data value.
2. Buffer instructions until they are ready. Insert each instruction into the reservation stations after renaming.
3. Keep track of the readiness of an instruction's source values. Broadcast the tag when the value is produced; instructions compare their source tags to the broadcast tag, and if they match, the source value becomes ready.
4. When all source values of an instruction are ready, dispatch the instruction to a functional unit (FU): wake up and select/schedule the instruction.

Summary of OOO Execution Concepts
Register renaming eliminates false dependencies and enables linking of a producer to its consumers. Buffering enables the pipeline to move on for independent operations. Tag broadcast enables communication (of the readiness of produced values) between instructions. Wakeup and select enable out-of-order dispatch.

OOO Execution: Restricted Dataflow
An out-of-order engine dynamically builds the dataflow graph of a piece of the program. Which piece? The dataflow graph is limited to the instruction window. Instruction window: all decoded but not yet retired instructions.
Can we do it for the whole program? Why would we like to? In other words, how can we have a large instruction window? Can we do it efficiently with Tomasulo's algorithm?

Dataflow Graph for Our Example
MUL R3 R1, R2 / ADD R5 R3, R4 / ADD R7 R2, R6 / ADD R10 R8, R9 / MUL R11 R7, R10 / ADD R5 R5, R11
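As a check on the graph, a short sketch can compute the height of this example's dataflow graph under the exercise's latencies (ADD 4 cycles, MUL 6); the scheduling model (unlimited FUs, true dependences only) is an assumption, so this is the dataflow limit rather than the machine's actual cycle count.

```python
# Hedged sketch: build the true-dependence (RAW) edges of the example and
# compute the length of the critical path through the dataflow graph.

prog = [("MUL", "R3",  ["R1", "R2"]),
        ("ADD", "R5",  ["R3", "R4"]),
        ("ADD", "R7",  ["R2", "R6"]),
        ("ADD", "R10", ["R8", "R9"]),
        ("MUL", "R11", ["R7", "R10"]),
        ("ADD", "R5",  ["R5", "R11"])]
LAT = {"ADD": 4, "MUL": 6}

finish = []
for i, (op, dest, srcs) in enumerate(prog):
    # each source value comes from the most recent earlier writer, if any
    deps = [max((j for j in range(i) if prog[j][1] == s), default=None)
            for s in srcs]
    start = max((finish[d] for d in deps if d is not None), default=0)
    finish.append(start + LAT[op])

# Critical path: MUL(6) -> ADD(4) -> ADD(4) = 14 execute cycles
assert finish == [6, 10, 4, 4, 10, 14]
assert max(finish) == 14
```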

State of the RAT and RS in Cycle 7

Dataflow Graph

Restricted Data Flow
An out-of-order machine is a restricted data flow machine: dataflow-based execution is restricted to the microarchitecture level. The ISA is still based on the von Neumann model (sequential execution).
Remember the data flow model (at the ISA level): an instruction is fetched and executed in data flow order, i.e., when its operands are ready -- there is no instruction pointer. Instruction ordering is specified by data flow dependences: each instruction specifies who should receive its result, and an instruction can fire whenever all of its operands are received.

Questions to Ponder
Why is OoO execution beneficial? What if all operations take a single cycle? Latency tolerance: OoO execution tolerates the latency of multi-cycle operations by executing independent operations concurrently.
What if an instruction takes 500 cycles? How large an instruction window do we need to continue decoding? How many cycles of latency can OoO tolerate?
What limits the latency-tolerance scalability of Tomasulo's algorithm? The active/instruction window size, determined by the register file, scheduling window, and reorder buffer.

Registers versus Memory, Revisited
So far, we considered register-based communication between instructions. What about memory? What are the fundamental differences between registers and memory?
- Register dependences are known statically; memory dependences are determined dynamically.
- Register state is small; memory state is large.
- Register state is not visible to other threads/processors; memory state is shared between threads/processors (in a shared-memory multiprocessor).

Memory Dependence Handling (I)
We need to obey memory dependences in an out-of-order machine, and need to do so while providing high performance.
Observation and problem: a memory address is not known until a load/store executes.
Corollary 1: Renaming memory addresses is difficult.
Corollary 2: Determining the dependence or independence of loads/stores has to be handled after their execution.
Corollary 3: When a load/store has its address ready, there may be younger/older loads/stores with undetermined addresses in the machine.

Memory Dependence Handling (II)
When do you schedule a load instruction in an OOO engine? Problem: a younger load can have its address ready before an older store's address is known.
What if M[r5] == r4?
Ld r2, r5 ; r2 <- M[r5]
St r1, r2 ; M[r2] <- r1
Ld r3, r4 ; r3 <- M[r4]

Memory Dependence Handling (II)
When do you schedule a load instruction in an OOO engine? Problem: a younger load can have its address ready before an older store's address is known. This is known as the memory disambiguation problem, or the unknown-address problem.
Approaches:
- Conservative: stall the load until all previous stores have computed their addresses (or even retired from the machine).
- Aggressive: assume the load is independent of unknown-address stores and schedule it right away.
- Intelligent: predict (with a more sophisticated predictor) whether the load is dependent on the/any unknown-address store.

Handling of Store-Load Dependencies
A load's dependence status is not known until all previous store addresses are available.
How does the OOO engine detect the dependence of a load instruction on a previous store?
- Option 1: Wait until all previous stores have committed (no need to check).
- Option 2: Keep a list of pending stores in a store buffer and check whether the load address matches a previous store address.
How does the OOO engine treat the scheduling of a load instruction with respect to previous stores?
- Option 1: Assume the load is dependent on all previous stores.
- Option 2: Assume the load is independent of all previous stores.
- Option 3: Predict the dependence of the load on an outstanding store.
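The store-buffer check can be sketched as below; the entry format and the conservative stall-on-unknown-address policy are illustrative assumptions, not the only possible design.

```python
# Hedged sketch: check a load against a buffer of older, uncommitted stores.
# Entries are (address-or-None, value), oldest first; a None address means
# the store has not computed its address yet.

def search_store_buffer(store_buffer, load_addr):
    """Returns ('forward', value) on a match with a known-address store,
    'stall' if an unchecked unknown-address store might alias, or
    'independent' if no older store can supply the load's data."""
    for addr, value in reversed(store_buffer):   # youngest older store first
        if addr is None:
            return "stall"            # conservative: unknown address may alias
        if addr == load_addr:
            return ("forward", value) # store-to-load forwarding
    return "independent"

sb = [(0x100, 1), (0x200, 2)]
assert search_store_buffer(sb, 0x200) == ("forward", 2)
assert search_store_buffer(sb, 0x300) == "independent"
# An older unknown-address store forces a stall under this policy:
assert search_store_buffer([(None, 9)] + sb, 0x300) == "stall"
```

Note that a matching store younger than the unknown-address one still forwards, since its value would overwrite the older store's anyway; the aggressive and predictive schemes from the previous slide replace the "stall" case with speculation plus recovery.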

Memory Disambiguation (I)
Option 1: Assume the load is dependent on all previous stores. + No need for recovery. -- Too conservative: delays independent loads unnecessarily.
Option 2: Assume the load is independent of all previous stores. + Simple, and can be the common case: no delay for independent loads. -- Requires recovery and re-execution of the load and its dependents on a misprediction.
Option 3: Predict the dependence of a load on an outstanding store. + More accurate: load-store dependencies persist over time. -- Still requires recovery/re-execution on a misprediction.
The Alpha 21264 initially assumes a load is independent, then delays loads found to be dependent.
Moshovos et al., "Dynamic speculation and synchronization of data dependences," ISCA 1997. Chrysos and Emer, "Memory Dependence Prediction Using Store Sets," ISCA 1998.

Memory Disambiguation (II)
Chrysos and Emer, "Memory Dependence Prediction Using Store Sets," ISCA 1998.
Predicting store-load dependencies is important for performance. Simple predictors (based on past history) can achieve most of the potential performance.

Food for Thought for You
Many other design choices: Should reservation stations be centralized or distributed? What are the tradeoffs? Should reservation stations and the ROB store data values, or should there be a centralized physical register file where all data values are stored? What are the tradeoffs? Exactly when does an instruction broadcast its tag?

More Food for Thought for You
How can you implement branch prediction in an out-of-order execution machine? Think about branch history register and PHT updates. Think about recovery from mispredictions. How do you do this fast?
How can you combine superscalar execution with out-of-order execution? These are different concepts: concurrent renaming of instructions; concurrent broadcast of tags. How can you combine superscalar + out-of-order + branch prediction?

Recommended Readings
- Kessler, "The Alpha 21264 Microprocessor," IEEE Micro, March-April 1999.
- Boggs et al., "The Microarchitecture of the Pentium 4 Processor," Intel Technology Journal, 2001.
- Yeager, "The MIPS R10000 Superscalar Microprocessor," IEEE Micro, April 1996.
- Tendler et al., "POWER4 system microarchitecture," IBM Journal of Research and Development, January 2002.