BrainIO

WalkAlong: 1



Introduction

This book teaches the design of the different abstraction layers in a computer, starting from high-level programming, through the application software, the compiler that compiles the program, the virtual machine on which it runs, down to the hardware CPU on which the software layers are built. Like most books on computer architecture it is a great read, but it uses a practical, hands-on approach to convey the design process while explicitly explaining the need for each new concept. I have taken a holistic approach to introducing each new abstraction layer, using basic language constructs to avoid throwing the reader off.

The first twelve chapters walk you through the design of the hardware, an SoC on an FPGA, and a virtual machine that can run on it. I named it 'BrainIO' because of its 32 smart bi-directional I/O pins. Each chapter introduces a concept found in the architecture of every basic computer and explains why it is necessary. The SoC is written in VHDL and the VM is written in C. The gradual, step-by-step process gives the book its name, 'WalkAlong1'.
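
To make the idea of a virtual machine concrete before the chapters dive in, here is a minimal sketch in C of the fetch-decode-execute loop that a VM of this kind is built around. The opcodes, instruction format, and register count below are hypothetical and chosen only for illustration; they are not the actual BrainIO instruction set.

/* Minimal fetch-decode-execute loop. Opcodes and the 3-byte
 * instruction format here are hypothetical, for illustration only. */
#include <stdint.h>
#include <stdio.h>

enum { OP_HALT = 0, OP_LOADI = 1, OP_ADD = 2, OP_PRINT = 3 };

int main(void) {
    /* Tiny program: r0 = 2, r1 = 3, r0 = r0 + r1, print r0, halt.
     * Each instruction is three bytes: opcode, dest, src/immediate. */
    uint8_t program[] = {
        OP_LOADI, 0, 2,
        OP_LOADI, 1, 3,
        OP_ADD,   0, 1,
        OP_PRINT, 0, 0,
        OP_HALT,  0, 0,
    };
    int32_t reg[4] = {0};
    size_t pc = 0;                       /* program counter */

    for (;;) {
        uint8_t op = program[pc];        /* fetch */
        uint8_t a  = program[pc + 1];
        uint8_t b  = program[pc + 2];
        pc += 3;
        switch (op) {                    /* decode and execute */
        case OP_LOADI: reg[a] = b;             break;
        case OP_ADD:   reg[a] += reg[b];       break;
        case OP_PRINT: printf("%d\n", reg[a]); break;
        case OP_HALT:  return 0;
        }
    }
}

Running this prints 5, the result of the toy program held in the program array.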

In ‘WalkAlong2’, I will introduce the compiler, which is built entirely in C++. This compiler compiles C++ programs for our VM. Two new changes in WalkAlong2 will change the course of our research. First, I have used a standard ISA, the RISC-V ISA. This ISA was developed at UC Berkeley and is much easier to implement than x86 or ARM. The second edition of this book will therefore feature a much better design of our chip: it will implement the stack memory, program memory, and data memory all in one memory unit. New concepts such as the frame pointer register, stack pointer register, and return address register will also be introduced.
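
As a preview of those new concepts, the sketch below shows in plain C how a single memory array can hold the stack alongside everything else, and how hypothetical stack pointer (sp), frame pointer (fp), and return address (ra) registers cooperate during a function call. It mirrors the spirit of the RISC-V calling convention, but the layout and sizes are illustrative, not the actual design from WalkAlong2.

/* One unified memory unit plus sp/fp/ra registers managing a call frame.
 * Frame layout and sizes are hypothetical, for illustration only. */
#include <stdint.h>
#include <stdio.h>

#define MEM_WORDS 1024

int32_t mem[MEM_WORDS];          /* one unified memory unit           */
int32_t sp = MEM_WORDS;          /* stack pointer: stack grows down   */
int32_t fp = MEM_WORDS;          /* frame pointer: base of the frame  */
int32_t ra = 0;                  /* return address register           */

/* Prologue of a called function: reserve a frame, save ra and caller's fp. */
static void prologue(int frame_words) {
    sp -= frame_words;
    mem[sp + frame_words - 1] = ra;   /* save return address */
    mem[sp + frame_words - 2] = fp;   /* save caller's fp     */
    fp = sp + frame_words;            /* new frame base       */
}

/* Epilogue: restore ra and fp, then release the frame. */
static void epilogue(int frame_words) {
    ra = mem[sp + frame_words - 1];
    fp = mem[sp + frame_words - 2];
    sp += frame_words;
}

int main(void) {
    ra = 42;                          /* pretend return address of the call  */
    prologue(4);                      /* enter a function with a 4-word frame */
    mem[fp - 3] = 7;                  /* use a free slot in the frame for a local */
    printf("local = %d\n", mem[fp - 3]);
    epilogue(4);
    printf("restored ra = %d\n", ra); /* back to 42 */
    return 0;
}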

The second idea, which introduces a new research interest, is the transition from serial programming to parallel programming using GPUs. This will be covered in ‘WalkAlong3’, where new concepts including GPGPU programming, OpenCL, CUDA, PyTorch, and training neural networks will be introduced. GPUs, unlike CPUs, have larger memory bandwidth, which makes them ideal for the parallel programming used in vector arithmetic. This advantage makes GPUs very useful for training neural networks, which involve heavy matrix arithmetic. So brace up and have fun.
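
To see why this matters, consider plain vector addition. The serial C loop below handles one element at a time; on a GPU (through CUDA or OpenCL) each iteration would instead run as its own thread, which is exactly the kind of workload that benefits from a GPU's wide memory bandwidth. This is only a conceptual sketch, not code from WalkAlong3.

/* Serial vector addition. A GPU kernel would map one thread to each
 * loop iteration instead of running them one after another. */
#include <stdio.h>

#define N 8

int main(void) {
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* The loop a GPU kernel would parallelize: one thread per element. */
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    for (int i = 0; i < N; i++)
        printf("c[%d] = %.1f\n", i, c[i]);
    return 0;
}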
