Followed the datasheet and still I cannot write even a single bit into the AT28C256 EEPROM. I searched online and found that everyone having the same problem was trying to build a homebrew computer, either Z80, 6502, or 8088. So I'm the latest victim in #retrocomputing? 🤣
The general conclusion seems to be that there's either a hidden timing issue somewhere (but I don't have a 32-channel logic analyzer to spot it...), or the chips are Shenzhen counterfeits...
And now here's the Eureka moment! In the AT28C256 datasheet I've just noticed this humble and inconspicuous line.
> All command sequences must conform to the PAGE WRITE timing specifications.
Unlocking the chip requires a PAGE WRITE timing sequence, but I was doing all the debugging based on the more commonly used BYTE WRITE timing sequence! 😱
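To make the fix concrete, here's the software data protection (SDP) disable command sequence from the AT28C256 datasheet, sketched in Python rather than on real hardware. `write_byte` is a hypothetical hook into whatever programmer actually drives the bus; the key point from the quote above is that all six writes must land within PAGE WRITE timing, i.e. back-to-back with no byte-write polling in between.

```python
# The six (address, data) writes that disable software data protection,
# per the AT28C256 datasheet command table.
SDP_DISABLE_SEQUENCE = [
    (0x5555, 0xAA),
    (0x2AAA, 0x55),
    (0x5555, 0x80),
    (0x5555, 0xAA),
    (0x2AAA, 0x55),
    (0x5555, 0x20),
]

def disable_sdp(write_byte):
    """Issue the unlock sequence via a caller-supplied write_byte(addr, data).

    The caller must guarantee PAGE WRITE timing: each /WE pulse has to
    follow the previous one within the page-load window, with no delay
    or ready-polling between the six writes.
    """
    for addr, data in SDP_DISABLE_SEQUENCE:
        write_byte(addr, data)  # no polling/delay between these writes!
```

If you debug with BYTE WRITE timing (polling after every byte, like I did), the sequence never registers as a command and the chip stays locked.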
2. Use the logic analyzer to probe every single pin... ✅
Case closed! The chips are original and working. But two jumper wires on the address bus were broken! Whenever I tried to check them with the scope, the probe would bend them back into contact.
Lesson learned: do NOT attempt to debug a parallel bus without a logic analyzer (if that's not possible, even downclocking the bus and sampling it with a program from devboard GPIOs may still be better than a scope alone), or you'll waste days of time!
I've just written the first test program in Z80 assembly. I hope it will blink some LEDs after I finish building the I/O logic on breadboards tomorrow... #retrocomputing
Using the data bus to light the LEDs when there's no I/O controller (yet) turned out to be a bad idea. I modified the program to poke a high memory address instead. #retrocomputing
After running the initial test, it's time to add proper address decoding logic. Before doing so, I decided to fully buffer the address/data buses to make them more robust.
Now both buses are fully buffered, but I'm running out of wires... Need to order several hundred more to build the decoder... 🤣 #retrocomputing
The address decoding logic is complete! Now my Z80 computer has 16 KiB of RAM (untested so far). A latch has been added to the data bus as a temporary I/O interface. Time to create a console by bitbanging some UART traffic for further development...
Found two bugs that prevented memory access: I had written down a wrong truth table, which was responsible for the missing chip select, and I discovered a floating ENABLE pin, which explained the random results.
But now the memtest routine reports ~500 bytes of good RAM, then the Z80 CPU mysteriously freezes. No M1 opcode fetch, no memory refresh, nothing at all...
Well, at least 500 bytes are more than enough to write a better diagnostic program.
My breadboard Z80 was freezing randomly after reading ~500 bytes of RAM. I probed the power supply and saw THIS... No wonder the system was crashing.
I knew breadboards have all sorts of problems, but you can never imagine how crazy it can get until you've built a computer on one. 🤣
Added some capacitors. Now the random crashes have disappeared. Decoupling is serious business!
Meanwhile, I'm now having predictable crashes instead... Must be another bug in my decoder.
I saw my Z80 was still glitching, and I found the NMI input was floating... It's hard to keep track of every single pin on a breadboard. Presumably spurious interrupts were messing it up; simply pulling the pin high solved the problem.
I polished the test a bit as well. Finally, my Z80 has passed the complete memtest, full 16 KiB (*) of RAM is now operational! Next step is adding a TMS9928A video output.
(*) It only shows 16368 bytes; 16 bytes of RAM are reserved for the test routine itself.
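For the curious, here's the general idea of the memtest, sketched in Python against a bytearray standing in for the 16 KiB SRAM. The real routine is Z80 assembly; the particular patterns (0x00/0xFF/0x55/0xAA, which catch stuck and shorted data bits) are my illustration, not necessarily what my routine uses.

```python
RAM_SIZE = 16 * 1024
RESERVED = 16  # scratch space reserved for the test routine itself

def memtest(ram):
    """Return the number of bytes that hold all test patterns."""
    good = 0
    for addr in range(RAM_SIZE - RESERVED):
        old = ram[addr]
        ok = True
        for pattern in (0x00, 0xFF, 0x55, 0xAA):
            ram[addr] = pattern
            if ram[addr] != pattern:  # readback mismatch: bad bit at this address
                ok = False
        ram[addr] = old
        good += ok
    return good

print(memtest(bytearray(RAM_SIZE)))  # 16368 on good RAM
```

A walking-bit address test would additionally catch shorted address lines (like my broken jumper wires), but even this simple per-byte readback is enough to count usable bytes.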
The TI TMS9918/28 video chip needs a 10.738 MHz (±0.005 MHz) clock... one of those weird NTSC frequency multiples from the analog days. Good luck finding a crystal of this frequency in the toolbox nowadays (it's #retrocomputing, no PLL/DDS synthesizer cheating). I was able to find some vendors of this frequency, but I doubt they have stock for immediate shipping. Easier to just buy a 21.477 MHz crystal and a CD4017 counter to divide it by two.
I spent some time moving all the vintage chips into ESD-safe tubes for peace of mind in future handling.
Most of them are older than me. Here they are:
* TMS9928 video chip, NMOS, from Texas Instruments, made in 1982.
* Z80 CPU, CMOS 6 MHz version, 1999; PIO, 2002; SIO revision 2, 1998; CTC, 1986.
* Yamaha 2608 sound chip w/ YM3016 DAC, 1993.
* M5M5165P 16-KiB SRAM, from Mitsubishi, production date unclear, but possibly from the mid-80s before the Japanese economic bubble burst.
I'm still waiting for more parts needed for the video chip; meanwhile it's a good idea to draw a schematic of the existing breadboard... And it seems drawing even a simple buffered address/data bus already requires a few hundred mouse clicks 😵 and has taken 2/3 of an entire page. Time to consider purchasing a trackball?
I'm reviewing the datasheet of the YM2608, because I can't understand what the huge DIP-64 package is doing here.
And... The SOUND CHIP has two groups of 8-pin GPIOs?! WTF?
So I can throw my Z80-PIO away now?
Well, I guess the decision was to reduce system integration costs and make it easier to add things like an audio interface to music synthesizers. But it still looks unusual.
tired: controlling LEDs with an Arduino.
retired: flashing LEDs with a Z80.
inspired: flashing LEDs with a Yamaha sound chip.
Not dead yet, my breadboard Z80 project is still ongoing. Now I'm trying to code some example programs, and struggling to learn the proper way to crunch 24-bit numbers.
The Z80 does not have an efficient (base+offset) or (pointer+offset) addressing mode. Even "uint16_t tmp = array[i]" is a challenge for a C compiler in itself. In asm, to walk through a bunch of numbers, one has to arrange the order of computation carefully so that the crunching and looping can be done simultaneously. 🤔🤔🤔
I was trying to modify a Z80 ICE emulator to read user input via GNU readline, but all attempts have failed; BSD libedit doesn't work either.
Sometimes "ld" reports a mysterious error: unresolvable R_X86_64_PC32 relocation against symbol `SP'
Or program segfaults at: set_curterm_sp()
Wait... WTF is "SP"? 🤔
After debugging for two hours, I found that the symbol "SP" is the "SCREEN pointer" in curses. Unluckily, the Z80 emulator also defined a global variable "SP", for the Z80 SP register.
I'm about to finish the first program for my #Z80 computer - a π-calculating program. This naive algorithm based on the arctan() series should be able to calculate π to 1000 digits and more. I'm not a 1337 assembly writer; the subroutine takes ~3,500,000 T-states (~1 second) to calculate ONE decimal digit. I also need to implement a ring buffer to handle repeated carries when a digit exceeds 9, so the actual program will be even slower. Well, better to make it work first...
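To show the idea (this is my Python illustration, NOT the Z80 code): an arctan-series π program in big-int fixed point. I'm assuming a Machin-style identity, pi = 16*atan(1/5) - 4*atan(1/239); the actual Z80 program may use a different arctan pair, and on the Z80 the big numbers live in byte arrays rather than Python's arbitrary-precision ints.

```python
def arctan_inv(x, scale):
    """atan(1/x) * scale, via the alternating Taylor series
    1/x - 1/(3*x^3) + 1/(5*x^5) - ... in integer arithmetic."""
    term = scale // x
    total, k, sign = term, 1, -1
    while term:
        term = scale // x ** (2 * k + 1)
        total += sign * term // (2 * k + 1)
        k += 1
        sign = -sign
    return total

def pi_digits(n):
    """First n+1 decimal digits of pi as a string, '3141...'."""
    guard = 10                         # guard digits absorb truncation error
    scale = 10 ** (n + guard)
    pi = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
    return str(pi // 10 ** guard)

print(pi_digits(50))  # 31415926535897932384...
```

Each series term is a big-number divide by a small constant, which maps well onto a byte-by-byte Z80 loop; that per-term long division is where most of those ~3,500,000 T-states per digit go.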
Fantastic. I got the ring buffer working; it now handles results greater than 9. It's the first time I've written ~30 lines of assembly that were not totally wrong, and they worked like magic after correcting just 3 typos. Now it should calculate π to many digits. Time to try it on the real #Z80. It should take 20 minutes to compute 1000 digits, which will be a good benchmark to uncover any hardware issues...
I loaded the π program onto my breadboard #Z80, and the computer gives a different result on each run, none of them correct... It must mean there's a hardware-level memory issue that corrupts some bytes and then gets amplified by the algorithm. Let's hope the issue has a direct fix and isn't due to the signal integrity of the breadboard wiring, or I'll have to start making PCBs...
My π program accomplished its intended purpose at least, "uncovering hardware issues", so it's a "successful failure"?
Okay, the breadboard #Z80 is now completely impossible to debug. Even though my design was divided into different functional blocks, with the board at the center acting as the "backplane", it's now essentially unreachable by any probe because the bus wires are everywhere. Now I understand why the S-100 bus was invented: it was meant to solve precisely this problem... Time to rewire everything so the bus spreads vertically, not horizontally...
And no PCBs yet as the design is not complete.
I decided to search for more information about my M5M5165P 8-KiB (not 16, it was my typo) SRAM, especially the introduction year.
Unsurprisingly, a complete 1985 Mitsubishi databook is available from Bitsavers.org, and it says my chip was a "new product", bingo (my actual chips were made in the mid-90s I believe)!
It also says the chip was meant only for "battery backup" use. High-performance computers were still recommended to use NMOS. I guess CMOS SRAM was a deadly expensive combo in 1985.
And the databook is much more interesting than I expected. In the appendix, it even gives an overview of the techniques they used to solve the problem of DRAM soft errors induced by alpha-particle radiation!
I knew the phenomenon was first reported by Intel and has been well known in the industry since the early 1980s, but I never expected to see it in a databook meant for marketing...
@niconiconi this reminds me that I should really figure out how good of a logic analyzer (and oscilloscope) an ICE40 can make. (Olimex even sells 100 MHz ADC modules and everything)
I have an old Open Bench Logic Sniffer laying around, but IIRC its USB communication is done via serial at 115 kbaud and it only has 256K of sample memory. Good luck capturing much at the 200 MHz maximum sample rate 😑
@ddipaola Anyway, even 256K of sample memory should work for my simple project, although it may need to be triggered externally.
I found the main difficulty when dealing with a retro parallel chip is the large number of wires. The speed is slow, not much higher than 400 kHz I2C/SPI, but good luck capturing the signals on 16 wires with a 2-channel scope...
Those homebrewers who were doing this in the mid-80s with a multimeter and a 2-LED-indicator logic probe alone are the Real Hardware Hackers.
@Neo_Chen Yes, I want to read debug messages from a standard USB serial converter. I'm now trying to output at 9600 baud. Does it work? No idea, but I will find out soon...🤔
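Since "bitbanging UART" keeps coming up: here's what it means at the bit level, sketched in Python. 8N1 framing is one start bit (0), eight data bits LSB-first, and one stop bit (1), each held for 1/9600 s. On the Z80 side the program just toggles one output latch bit with timed delays; this sketch only builds the bit sequence the USB serial converter would see.

```python
BAUD = 9600
BIT_TIME_US = 1_000_000 / BAUD  # ~104.2 us per bit at 9600 baud

def uart_frame(byte):
    """Return the 8N1 line levels for one byte, start bit first."""
    bits = [0]                                   # start bit (line driven low)
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits += [1]                                  # stop bit (line back to idle high)
    return bits

print(uart_frame(0x55))  # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```

0x55 ('U') frames as perfectly alternating levels, which is why it's the classic test byte for eyeballing baud rate on a scope.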
@niconiconi Yep. I have a very interesting paper regarding supply-power-related secret side channels in recent (or not) integrated circuits: we MUST really care more about decoupling in motherboard design, and have TRULY independent power supply paths to cut those secret side channels. Capacitor decoupling really isn't enough any longer.
Enjoy and learn :
This one too :
@niconiconi cc @firstname.lastname@example.org I really encourage you to have a look at these two publications. It's about a new generation of hardware backdoors and side channels affecting FPGAs and ASICs too. A must-read to keep in mind this new shit while designing truly free integrated circuits. It's fucking bad news again, but we'll overcome it collectively, without doubt. Aware, together, we are stronger. Being aware is 50% of the solution...
@stman Daniel Genkin's research was also interesting. You may have heard of his work on EM/acoustic side channels, but he also demonstrated a GROUND side channel. The ground side channel can even be detected across a human body; this attack could make a great cyberpunk sci-fi plot - the attacker recovers your private key by just touching the computer 🤣
Leakage! 💻 🔐 🔊 -----> 🎙️ 🕵️ 🖥️
Leakage! 💻 🔐 📶 -----> 📡 🕵️ 🖥️
Leakage! 💻 🔐 ⏚ -----> 🔌 🕵️ 🖥️
@niconiconi Oh boy. I've liked them when I've used them, but I never felt I had the accuracy of a mouse. On the other hand, I've only used centre-ball models, not thumb models, as I bought them to deal with thumb pain.
I wonder if you could rig up a foot pedal to a switch and use that to do the clicking?
@Canageek Thanks for the suggestion. Anyway, I think the first thing I need to do is open the documentation and start memorizing all the hotkeys...
@RafiX To be fair, even modern audio codec chips and sound chips usually come with a few GPIO pins (often highly reconfigurable ones) as well.
But can you recall any other sound chip from the same era with _16_ dedicated GPIO pins? I have only seen 8 pins maximum before... Compared to the cost of large-scale integration, the cost of adding 16 extra pins was low... I guess that's why they did it on the 2608, which was meant to be a high-end PC chip...
@niconiconi I worked on a Game Boy game long ago. I was trying to figure out the fastest way to copy some data around (for a blitter-type operation). If I remember correctly, I think I landed on using stack operations (pop) for the first half of the copy operation. Fun times!
@cstanhope Yep. I've read many times that the fastest way to transfer data is using the stack operations. You may like this 6809 hacking story (if you haven't read it before), which took this technique to the extreme. http://blog.moertel.com/posts/2013-12-14-great-old-timey-game-programming-hack.html
@niconiconi That was a good read. I always wanted to have a reason to work on a 6809. From what I've read, it seems like it took 8-bit CPUs about as far as you could.
@niconiconi On the topic of Z80's, I ran across a Z80 based device today that I think is still in use, a Hirsch Model 2.
@bikecurious The Z80 (and its derivatives) is still in production today and is still used in a great deal of legacy industrial and embedded devices. You can purchase brand-new Z80 chips, which are pretty much the same CMOS chip from the 1980s.
@EdS Cool, with this ICE one can debug everything. I don't feel that I need it for now, but I'll definitely add it to my toolbox. Thanks.
LB: Calculating Pi seems like a great way to put a computer through its paces for testing. Also, @gdr suggested a program which renders a fractal. Although they perhaps intended it more as a performance benchmark check, it nonetheless should exercise enough of the computer to test system validity.
@niconiconi This was semi-common in the 80s, at least Atari's POKEY also had I/O and audio jammed onto it.
Guessing that fabbing these VLSIs was expensive enough that they threw everything onto a single chip instead of making one for each function.
@ieure Also worth noting that many early VLSI companies were vertically integrated, so they could often piggyback the features needed for their own products onto the next general-purpose chip. I guess that's also one of the reasons behind this design.
@sirspate No. I don't know how their home computers / consoles did it, but arcade hardware typically mapped the four directions of the joystick(s) directly into CPU memory, one bit per direction.
POKEY supported potentiometer reading for paddles or a trak-ball, timers, and keyboard matrix scanning, all in addition to the sound generation.