Friday, December 22, 2017

VAX utility for changing process accounting info.


  One of my VAX sites was an engineering firm that used the computer 
to do work for many different clients. There were often dozens of 
different customers' projects in house at the same time, and some 
of the design engineers were simultaneously involved with several 
of them.

  The management of this firm needed accounting of computer use broken
down on a project by project basis. VMS provides an accounting
utility, but it is more oriented toward tracking the use of the
computer on an individual user account basis. Each login account has
an account string associated with it, and accounting data is keyed off
of this string. To make use of this accounting in a project oriented 
environment, it would be necessary for a user to log out and log in with
another username (with a different account string in the UAF) to begin
charging activity to another project. 

  Logging out, and then logging in again under another account was an
undesirable solution from just about any standpoint. For one, it takes
time to get logged back in on a busy machine, since process 
creation/image activation is not what VMS is good at. For another, it 
makes use of MAIL and PHONE for office communication next to 
impossible, since you never know what login a person will be using at 
any given time. This solution also was not received very well by the 
user community, and was mostly ignored - people tend to worry more 
about getting the work done rather than logging in and out constantly 
to get the bean counting right.

  To address this need we evaluated several third party VAX/VMS
accounting products, but the quality of the packages was highly
variable. The ones that did perform as advertised were massive
overkill for our needs, being very expensive to purchase, and very
complicated to set up and maintain. We decided to look into
implementing our own form of project accounting, using as much of the
standard VMS accounting environment as possible.

  I experimented with the $SNDJBC system service, which enables a
program to write records to the system accounting file. It is easy
enough to use, alright, but what it does is write out "user"
accounting data, which the normal ACCOUNTING command refuses to
report on. This means that you have to write a program to read the
accounting file, find all of your records, and then write reports off
of them. 

  Well, this approach was really starting to smell more like a job for
an application programmer (and that ain't me). I also resented having
my accounting records treated like second class citizens by the
ACCOUNTING utility. In order to avoid having to write a report
program, I started daydreaming about exotic VMS internals type
solutions (as I often do when I am confronted with the prospect of
real work). I realized that all I needed was a little piece of a
normal logout to occur - the part that writes a process termination
message in the accounting file. Likewise, I needed a little part
of a normal login to occur - the part that sets up a new account
string and zeroes out the fields that record usage for a process.
These two functions would be sufficient to change projects
without the hassle of logging out and back in. 

  It took a little time sitting in front of the microfiche reader,
but I found the code that writes the accounting record when a process
terminates. It is done by a KERNEL mode JSB call to an executive
routine called EXE$PRCDELMSG, presumably mnemonic for Process Deletion
Message. I was grateful that this was broken out into a subroutine,
rather than being buried in the body of the process deletion code.

  This routine takes one argument - R5 must contain either 0 or the
address of a block of non paged pool to be deallocated. In this
utility, we have nothing to deallocate, so R5 is cleared before the
call. It is also necessary to set the final status for the process
before you call the subroutine - else the status of the last image to
terminate will go into the accounting record. The status is in the P1
space of a process, at location CTL$GL_FINALSTS. My code just puts an
SS$_NORMAL status there, but if need be, you could have it store a
status of your own choosing. This might be useful if you want to
be able to tell a project change from a real process deletion, for
instance. 
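
  Just to make the shape of that concrete, here's roughly what the simulated
logout piece looks like. This is a minimal sketch rather than the actual
SETACCOUNT source - it assumes you link against SYS$SYSTEM:SYS.STB so the
CTL$ and EXE$ symbols resolve, and it leaves out all of the error handling
a real version needs.

        .TITLE  FAKE_LOGOUT
        $SSDEF                          ; defines SS$_NORMAL

        .ENTRY  START,^M<>
        $CMKRNL_S ROUTIN=KRNL_PART      ; do the interesting part in kernel mode
        RET

KRNL_PART:
        .WORD   ^M<R5>                  ; entry mask - $CMKRNL CALLGs us
        MOVL    #SS$_NORMAL,G^CTL$GL_FINALSTS ; final status for the record
        CLRL    R5                      ; nothing for EXE$PRCDELMSG to deallocate
        JSB     G^EXE$PRCDELMSG         ; write the process termination record
        MOVL    #SS$_NORMAL,R0          ; report success back to $CMKRNL
        RET

        .END    START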

  That took care of the simulated logout. To simulate a fresh login,
accounting wise, I had to find the cells in the process and job data
structures where the accounting information is stored. A look in the
appendices of "VAX/VMS Internals and Data Structures" by Kenah and
Bate was enough to find them. They are listed below. 

  These next five labels are P1 space addresses (NOT offsets)

CTL$GL_VOLUMES number of volumes mounted
CTL$GQ_LOGIN login time, in VMS quadword date format
CTL$T_ACCOUNT process account string 
CTL$GL_WSPEAK peak working set size 
CTL$GL_VIRTPEAK peak virtual page count

  The following label is an offset from the Job Information Block

JIB$T_ACCOUNT Job account string

  The following labels are offsets from the Process Header

PHD$L_IMGCNT count of images this process has activated
PHD$L_CPUTIM cpu time used
PHD$L_PAGEFLTS count of page faults incurred
PHD$L_PGFLTIO count of page fault I/Os performed
PHD$L_DIOCNT number of direct I/Os performed
PHD$L_BIOCNT count of buffered I/Os done


  To simulate a fresh login, all that was necessary was to load the 
account string in the two account fields, and to zero the rest of 
them. The account string fields are not like most of the text fields 
you will find in VMS data structures, in that they are neither counted 
ASCII nor descriptor data types - they are just eight characters in a 
row. 
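
  With those cells in hand, the fresh-login half is just a handful of MOVs
and CLRs. Here's a sketch of it (again, kernel mode with no error handling;
it assumes the PCB$, JIB$ and PHD$ offsets come from the $PCBDEF, $JIBDEF
and $PHDDEF macros in SYS$LIBRARY:LIB.MLB, and that CTL$GL_PCB and
CTL$GL_PHD point where I remember them pointing):

        ; R2 -> eight bytes of space padded account string
        MOVQ    (R2),G^CTL$T_ACCOUNT    ; per-process copy of the account string
        MOVL    G^CTL$GL_PCB,R0         ; our software PCB
        MOVL    PCB$L_JIB(R0),R1        ; -> Job Information Block
        MOVQ    (R2),JIB$T_ACCOUNT(R1)  ; job-wide copy (subprocesses inherit this)
        MOVL    G^CTL$GL_PHD,R1         ; -> Process Header
        CLRL    PHD$L_IMGCNT(R1)        ; zero the usage counters...
        CLRL    PHD$L_CPUTIM(R1)
        CLRL    PHD$L_PAGEFLTS(R1)
        CLRL    PHD$L_PGFLTIO(R1)
        CLRL    PHD$L_DIOCNT(R1)
        CLRL    PHD$L_BIOCNT(R1)
        CLRL    G^CTL$GL_VOLUMES        ; ...and the P1 space ones
        CLRL    G^CTL$GL_WSPEAK
        CLRL    G^CTL$GL_VIRTPEAK
        $GETTIM_S TIMADR=G^CTL$GQ_LOGIN ; the "login" time becomes right now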

  All that was needed to complete the utility was a user interface, to
allow the users to enter a command to change projects. LIB$GET_FOREIGN
is used to input a new account string. This string is checked to
make sure it is eight characters or less. If it is shorter than
eight characters, it is padded with spaces. At my site, this is
sufficient. Some sites will need to add additional validation of
the input string to make certain that it is a valid project code,
or that this person can charge to it, or whatever. 
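
  One way to do that front end in a few lines of ordinary user mode code (a
sketch, not the actual SETACCOUNT source - here a fixed length eight byte
descriptor lets LIB$GET_FOREIGN do the space padding and the length policing
for us, and a too-long string comes back as LIB$_INPSTRTRU without touching
anything):

        .PSECT  RWDATA,WRT,NOEXE
ACCBUF: .BLKB   8                       ; the new, space padded account string
ACCDSC: .LONG   8                       ; fixed length descriptor - length...
        .ADDRESS ACCBUF                 ; ...and address
PROMPT: .ASCID  /Enter account string...>/

        .PSECT  CODE,NOWRT,EXE
        .ENTRY  GET_ACCOUNT,^M<R2,R3,R4,R5>
        MOVC5   #0,ACCBUF,#^A/ /,#8,ACCBUF ; pre-fill the buffer with blanks
        PUSHL   #0                      ; returned length not needed
        PUSHAQ  PROMPT                  ; prompt used if no command argument given
        PUSHAQ  ACCDSC                  ; where the string goes
        CALLS   #3,G^LIB$GET_FOREIGN
        RET                             ; status - check it before going further
        .END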

  Here's the source for SETACCOUNT.

setaccount.mar

  To use SETACCOUNT, first assemble and link it

$ MAC SETACCOUNT
$ LINK SETACCOUNT /NOTRACE

 For the average run o' the mill user to use this, it has to be installed
with CMKRNL privilege. You'll have to decide for yourself if your site is
OK with this requirement.

$ INSTALL/OPEN/HEAD/SHARE/PRIV=CMKRNL SETACCOUNT.EXE


  Then, define a foreign symbol to invoke it, specifying a command 
name of your choice, and an appropriate directory specification.

$ SETACCOUNT:==$somedisk:[somedir]SETACCOUNT.EXE

  To change projects, simply use the command. If you enter the command 
with no new account string, you will be prompted for one.

$ SETACCOUNT JOB709

$ SETACCOUNT
Enter account string...>DIREWOLF

  If the string entered is too long, an error message is printed, and
no accounting information is altered.

$ SETACCOUNT THISISWAYWAYTOOLONG
LIB-F-INPSTRTRU,input string truncated


  After each execution of the command, a new accounting record will be
written. The ACCOUNTING utility is then usable to produce reports by
project. I should point out that the SETACCOUNT utility will not
change the accounting information of any subprocesses that are in
existence when the utility is invoked. All subprocesses created after
the use of the SETACCOUNT command will, however, inherit the current
correct accounting information (they will acquire it from the
JIB$T_ACCOUNT field in the Job Information Block). If your site has
applications that use a lot of subprocesses that are created early
on and don't terminate until logout, then this utility might not be
appropriate for you. Changing the accounting information of all
subprocesses of a process when the SETACCOUNT command is issued is
a little more involved (it requires queueing ASTs to the other
processes), and was more of a solution than we needed.

Thursday, December 21, 2017

VMS Binary File Editor

A long long time ago, when the Earth was young, and we rode our 
dinosaurs to work every morning, I worked as a system manager at a large 
VAX site. It was around 1986 and we had a pretty large cluster that we
used, along with some other local VAXes, to support around 8,000 engineers
and office workers all over the world. 

  One day, I got a call from the folks that managed the accounts on the
system. They allowed as to how they were getting errors when they tried to
add any new accounts to the UAF. Next, the Help Desk called to relate that
lots of people couldn't log in anymore. I found this...alarming. A little
checking soon revealed that the UAF was corrupt. It apparently had a bad
block in the middle of it, and RMS was not well pleased when it tried to
read it. 

  I tried the usual RMS fixes. First, on the theory that maybe the bad
block was in the middle of a secondary index, I tried to convert the file
to a sequential file, and, if that had worked with no errors, I could have
then converted it back to an indexed file, with no loss of data. No soap -
it couldn't successfully convert to a sequential file. 

  I thought about the old RMS trick of "patching" around the bad bucket.
That can make an RMS file readable again, but it had a pretty good chance of
losing some records. Losing random records out of the UAF did not appeal
to me as a solution.

  I considered restoring from backup, but, the backup had been done 
Friday, and it was Monday afternoon now - it was a busy place and a lot
of work had been done since then (at the time, the sun never set on this 
engineering firm). Accounts had been added and deleted, Identifiers had
been granted and revoked, last login times updated, passwords changed -
well, you know how it is - lots of changes. 

  Using the backup would have been a very large pain in the sitz-platz. 
But I got to thinking - a lot of the UAF doesn't change all that much from 
day to day - the odds were good that the bad block, be it a user record or 
a piece of metadata, had occurred in a spot that hadn't changed since
the last backup. Finding what block was bad was trivial - I DUMPed the
file and it keeled over and told me when it hit the bad block. Then I used
DUMP to dump the rest of the file, starting after that block, to make
sure only one block was bad. All I needed then was a program to read that
block out of the UAF from the Friday backup, and update it into the current
production one, writing over the bad one (well, bad block relocation would
take place, but I wasn't worried about that low level for this problem -
functionally, the bad block got overwritten). 
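
  In outline, such a fixup program doesn't amount to much more than this (a
MACRO-32 sketch with made-up file names and block number - the real thing
also wants sharing options and much better error handling):

BADVBN = 1234                           ; hypothetical bad virtual block number

GOODFAB: $FAB   FNM=<UAF_FRIDAY.DAT>,FAC=<GET,BIO>
GOODRAB: $RAB   FAB=GOODFAB,UBF=BLKBUF,USZ=512
BADFAB: $FAB    FNM=<SYSUAF.DAT>,FAC=<PUT,BIO>
BADRAB: $RAB    FAB=BADFAB,RBF=BLKBUF,RSZ=512
BLKBUF: .BLKB   512                     ; one disk block

        .ENTRY  START,^M<>
        $OPEN   FAB=GOODFAB             ; the backup copy, for block I/O
        BLBC    R0,10$
        $CONNECT RAB=GOODRAB
        BLBC    R0,10$
        $OPEN   FAB=BADFAB              ; the damaged production file
        BLBC    R0,10$
        $CONNECT RAB=BADRAB
        BLBC    R0,10$
        MOVL    #BADVBN,GOODRAB+RAB$L_BKT ; read that VBN from the good copy
        $READ   RAB=GOODRAB
        BLBC    R0,10$
        MOVL    #BADVBN,BADRAB+RAB$L_BKT ; write it over the same VBN
        $WRITE  RAB=BADRAB
10$:    RET                             ; status in R0
        .END    START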

  So....that's what I did. I wrote a block IO program that read block
number X out of the good backup file, and updated it into block X of the
bad production file. I held my breath and did a CONVERT...it succeeded. A
little testing with UAF showed it was all good to go now - the failures
the accounts folks were seeing didn't happen anymore. All of the hard 
working engineers and office workers could log in again. The phones didn't 
ring off the wall with people asking what happened to the changes from the
last three days. All was well again in Whoville and the phone stopped
ringing. 

  But the whole mess made me think that I ought to have a utility on hand 
and ready to go that could easily read, write, edit and block copy data
around for any future situations such as the above - something a little
more general purpose than the fixup program I used that time. I also had
need of a utility that could do binary edits on files and was easier to
use than PATCH/ABSOLUTE. 

  ZAP was that program. I named it after the famous RSX11 ZAP program, 
which was a brilliant hack that turned ODT in RSX into a file editor with 
the addition of just a few lines of code.

  ZAP will let you edit files character by character, in hex or ASCII. It 
will allow you to copy blocks around inside a file, as well as copy a 
block or blocks from one file and write them into another file. ZAP is one 
of a very few programs I wrote in Fortran instead of MACRO-32, so, by 
happy coincidence, it is also one of the very few programs I have written 
that will work on Alphas (and likely Itaniums, although I haven't tested it 
on one) as well as VAXes.

  Here's the sources for ZAP

build.txt

zap.for

screen_init

ufo.for

read.for

write.for

format_line

fresh.for

  To build...

Rename build.txt to build.com (Google sites won't let me upload a file
with the extension of ".com"...) and then execute it.

$ rename build.txt build.com
$ @build

  To use

$ zap :== $disk:[directory]zap.exe
$ zap somefiletozap.ext

   Or, just run zap

$ run zap
  And you will be prompted for what file you want to edit.


  The leftmost panel in ZAP has a command summary. Here's what a ZAP
session looks like. 

And here it is in ASCII mode

  Basically, in any block, use the cursor keys to move around. When you
reach the bottom or top of the screen, the display will scroll up or down
as needed within the current block. It will not scroll into the next block.
To change a value, position on it, then enter the new value. If you are in
HEX mode and want to enter a new value, the entry must be two digits
(leading zeros are required). To write any changes you make to the file,
press the DO key or GOLD-W before leaving the block (several functions have
two key sequences that can perform them, since not all keyboards have DO,
Select, and other DEC terminal specific keys). Note that the hex mode
display is formatted like a VMS dump - the lower addresses are on the
right, increasing as you go to the left. ASCII mode is like text, it goes
from left to right. Blocks that are copied go into a temporary file, so you
can copy blocks from a file, close that session, start ZAP on another file,
and paste those blocks into it.

Tuesday, December 19, 2017

VAX utility for changing page protection on pages in system space.


  Back in the day, I was involved in a project that needed to make a small 
routine located in non-paged pool accessible from all processes on a
system. The problem was that non-paged pool pages are protected at
ERKW - Exec Read, Kernel write. My routine needed to execute in User mode,
and thus could not work in those pages. I needed a routine to alter the
protection of the pages that the code resided in.
  
  I wrote a little utility that would allow me to examine and change page
protection settings from DCL. It's a simple thing, really - it gets a
command line, parses it with TPARSE, and then looks up the existing page
protection in its PTE. If a new protection was specified on the command
line, it is updated. If not, it just prints out the existing value. 
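
  The heart of it, once the command line is parsed, is only a few
instructions against the system page table. Roughly like this (a kernel mode
sketch - it assumes the PTE$ field definitions and PR$_TBIS come from the
macros in SYS$LIBRARY:LIB.MLB, that MMG$GL_SPTBASE resolves from SYS.STB,
and it skips the range checking and synchronization a real version needs):

        ; R2 = system virtual address, R3 = new protection, or -1 to just look
        MOVL    R2,R0
        SUBL2   #^X80000000,R0          ; byte offset into system space
        ASHL    #-9,R0,R0               ; divide by 512 - index of the SPTE
        MOVL    G^MMG$GL_SPTBASE,R1     ; base of the system page table
        MOVAL   (R1)[R0],R1             ; R1 -> this page's PTE
        EXTZV   #PTE$V_PROT,#PTE$S_PROT,(R1),R4 ; old protection, for the report
        TSTL    R3
        BLSS    10$                     ; no new value given - just report
        INSV    R3,#PTE$V_PROT,#PTE$S_PROT,(R1) ; stuff in the new protection
        MTPR    R2,#PR$_TBIS            ; flush that translation from the TB
10$:                                    ; ...format and print R4 for the user...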

  The syntax is simple...

  Print the page protection for an address.

$ aprt 81000000
Page 81000000 protection = URKW 

  Print the page protections for the pages between address1 and address2

$ aprt 81000000:81000400
Page 81000000 protection = URKW
Page 81000200 protection = URKW
Page 81000400 protection = URKW 

  To modify a page...

$ aprt 81000000/prot=urew

Page 81000000 protection = URKW 


   To modify a range of pages...

$ aprt 81000000:81000400/prot=urew

Page 81000000 protection = URKW  
Page 81000200 protection = URKW  
Page 81000400 protection = URKW 

Note that the protection listed is the protection BEFORE the change is 
applied


  The page protection can have values of...

NA        ;no access
RESERVED  ;invalid protection - never used
KW        ;kernel write
KR        ;kernel read
UW        ;user write
EW        ;executive write
ERKW      ;exec read kernel write
ER        ;exec read
SW        ;supervisor write
SREW      ;supervisor read exec write
SRKW      ;supervisor read kernel write (bet this is never used)
SR        ;supervisor read
URSW      ;user read supervisor write
UREW      ;user read exec write
URKW      ;user read kernel write
UR        ;user read

  Now, I gotta warn ya - this utility is intended for people who know what 
they are doing. You can jam up your system mighty quick if you set page 
protections "funny". I would be particularly cautious about giving write 
access to pages that don't already have it - I'm not sure what backing 
store would get used if such a page faulted.... so proceed 
with caution...and as always, proceed at your own risk.

  Here's aprt.mar

APRT.MAR

  To build the program...
$ mac aprt
$ link aprt
$ aprt :== $disk:[directory]aprt.exe

  You need to substitute the disk and directory spec where aprt.exe is located.

Wednesday, September 20, 2017

Utilities to alter Files-11 attributes on VMS and RSX


  Files on most systems these days are just a bag of bytes. Just bytes in 
a row, maybe with an occasional Carriage Return and/or Line Feed character
thrown in to provide some sort of notion of records or readability. 

  That's not the way RSX and VMS did things. Bytes are stored in a file, 
sure enough, but files have metadata associated with them (and often extra
bytes in the file as well) to support the concept of records. Several
types of records are supported - undefined, variable length, fixed size,
and like that. Carriage control for records isn't totally left to embedded
CR/LFs - it too is an attribute, supporting None, Fortran style, printer
control style, or implied (if you are displaying the file, a CR/LF gets
automatically displayed on output). VMS/RSX also have varying levels of
support for files with different sorts of random and keyed access built in
- sequential, relative and indexed. Interestingly enough, the system
definitions indicate that there was thought of adding a Hashed file type,
but it was never developed. 

  One thing is pretty certain - when a file gets transferred to/from a VMS 
or RSX system and one of these other types of systems, something is going 
to get hurt. Record lengths, types and carriage control are almost certain 
to go wrong or go away in the process.

  I wrote the ICONV utility back in 1988 to deal with this on VMS. Since 
then, VMS has added a SET FILE /ATTRIBUTE command to deal with these 
problems, so I only occasionally have to use ICONV when working on older 
versions of VMS. These days, I work a lot more on RSX than VMS, so I
recently coded up a version of ICONV for RSX to help fix transfer damage
there. 
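
  For what it's worth, the native command looks something like this on the
versions that have it (check HELP SET FILE /ATTRIBUTES for the exact
keywords on your system):

$ SET FILE/ATTRIBUTES=(RFM:VAR,RAT:CR) SOMEFILE.TXT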


iconv.mac

  To build the RSX version of ICONV
>mac iconv=iconv
>tkb
TKB>iconv=iconv
TKB>/
Enter Options:
TASK=...ICV
//

  To use...
>INS ICONV
or
run iconv
ICV>file /sw:val /sw:val

 Switches are:
 /HE prints this help text
 /LI prints the file attributes
 /VE prints out program version
 /ORG:val values are SEQUENTIAL, RELATIVE, INDEXED, or HASHED
 /REC:val values are UNDEFINED, FIXED, VARIABLE, SEQUENCED or STREAM
 /CC:val  values are NONE, FORTRAN, CARRIAGE_RETURN or PRINT
 /BLK or -/BLK - sets or clears the "records can't span
    blocks" flag
 /MRS:val value is the maximum record size, in decimal
 /RSZ:val value is the record size, in decimal

  Like I said above, ICONV for VMS is useful only on older VMS systems, 
but, waddahell, I'm including it here in case someone needs to adjust 
attributes on an old VAX somewhere. It's also a middlin' good example of 
how to use TPARSE, and how to programmatically change file attributes.

iconv.mar

iconv.hlp

  To build...

$ mac iconv
$ link iconv
$ iconv :== $diskx:[diry]iconv.exe

then,

$ iconv z.tmp/type=fixed  etc...

or $ iconv
or $ run iconv
Yes?>

to be prompted for the command line.

  If you want to use the built in help in ICONV, build the help library

$ libr/create/help iconv.hlb
$ libr/insert iconv.hlb iconv.hlp


 Then define a logical ICONV_HELP to point to the location of the HLB file

$ def/system iconv_help dua0:[somedir]

 The help info is invoked by, logically enough, the help command

$ iconv help                                         

ICONV

    This is ICONV, a program for changing the attributes of a file.
    Use at your own risk - this is serious business.

    Format:


      ICONV input-file-spec



  Additional information available:

  Parameters Command_Qualifiers
  /TYPE      /ORG       /CC        /SPAN      /RSIZE     /VFCSIZE   /MAXREC
  /CREDATE   /BAKDATE   /REVDATE   /EXPDATE

ICONV Subtopic?

  Regarding both of these versions of ICONV...you are assumed to know what 
you're doing...either one of them could be used to scramble up a file's 
attributes pretty thoroughly...so be careful...

Sunday, August 13, 2017

Utility to read RX50 Teledisk images on RSX systems


The DEC RX50. Has there ever been a more disliked peripheral in the DEC world? Well, of course there has - the TK50. But we're talking about the RX50 today.

  The RX50 is a dual 5-1/4 inch floppy drive that found its way on to a lot of DEC gear in the 80s. RX50s were used on the DEC Professional computers (325s, 350s and 380s), on DECMates and on lots of smaller PDP11 and VAX systems.

  A lot of software from that era (including a lot of software for DEC machines on RX50s) wound up being archived by a program called "Teledisk". Teledisk was a pretty capable utility for saving floppies to files. It had a lot of bells and whistles for preserving all sorts of arcane floppy formats and hardware quirks. Teledisk is no longer sold or maintained, and the old versions you can find on the web run only under DOS on slow PCs (and, I hear, assorted emulators).

  I occasionally run across RX50 images that have been preserved via Teledisk (usually, files with extension .td0 are Teledisk images).  Not all of these are available elsewhere as simpler images. I could have gotten an old version of Teledisk and an old PC or emulator - but, I'm DEC blue through and through - I'd rather eat a spoonful of dirt than use a solution like that.

  Instead, I wrote an RSX utility that can read Teledisk images of RX50s and write them out as a disk image in LBN format - the format used by VCP (the RSX virtual disk program), LD disks on VMS,  and simh virtual disks. It can also write the image in track and sector order, if you have need of that.

  RTD (for ReadTeleDisk) is written to work only with RX50 images, and has only been tested with RX50s that had a Files-11 (that is, RSX or VMS) file structure. If anyone needs support for something else let me know and I'll add it in the next version.

  Note - I just found out that there is an "advanced" version of Teledisk that produced file images of RX50s that RTD can't read. I'm working on a new version that can read these advanced images.

RTD.MAC

To make...
mac rtd=rtd
tkb
TKB>rtd=rtd
TKB>/
Enter Options:
TASK=...RTD
//

To use...
>run rtd
RTD>outfile=infile
or install it
>ins rtd
>RTD outfile=infile
outfile extension defaults to .dsk, infile extension to .td0

Switches are /LBN and /TS. /LBN is the default and outputs the dsk file in logical block order. /TS outputs in ascending track, sector order. /LBN is the default because that's what you'll want if you're going to use the dsk file as a simh virtual disk or a VCP virtual disk. /TS order is for...I don't know what it's for. Can't think of any use for it, but I included it in case it's ever needed.

  So, you copy your .td0 image file to your RSX system (via DECnet, FTP binary, or whatever). You use RTD to create the DSK file, and then mount and read it using VCP or simh. What could be simpler?

 The info about the Teledisk file format that I needed to write this utility was found in an article by Dave Dunfield, at http://www.classiccmp.org/dunfield/img54306/teledisk.htm.

Friday, April 14, 2017

RSX utility to convert RT11 dump file to virtual disk

  Recently, someone asked me if I could read some Fortran files off of an old 8 inch floppy diskette for them. I have a couple of old DEC PDT-11/150 systems here, that have 8 inch single density drives on them, so, figuring anything's possible,  I said I'll give it a try. It turned out that they were from a DEC system (good), and were single density (even better). But, they were from an RSX-11 system, in Files-11 ODS-1 format. Not so good - my PDT-11/150s run DEC's RT-11 operating system -  a completely different animal when it comes to disk formats. Reading the files off of it by a few simple commands was pretty much right out.

  But, it occurred to me that I had need of a way to read all of my old RT-11 floppies and save them somewhere, because floppy diskettes ain't diamonds (they're not forever) - so writing a utility  to read this fellow's diskette would be of use to me. Besides, I'll use any excuse to write a MACRO-11 program.

  My PDT-11's don't have DECNET on them (I hear DECnet-RT is rare, and very doubtful it would fit on a diskette based system). I've had nothing but bad luck trying to get Kermit to reliably transfer files from them. They don't have  ethernet, only serial ports, so anything ethernet based wasn't going to play.

  I decided, what the hell, I'll do a screen capture of the output of a DUMP command and then translate it back into a virtual disk file that I could read with a SIMH RSX system, and from there transfer them via TCP/IP to a PC, from whence I can mail them to the guy who needed them. 

  Weeks went by and I finally completed DCN (Diskette CoNvert). DCN will take as input a screen capture of the output of an RT-11 DUMP/TERMINAL command from an RX01 or RX02 diskette, and reassemble it into a DSK file that SIMH can digest. It sounded simple, but was complicated by DEC's interleave and skew of the tracks and sectors, and the fact that DEC doesn't use track 0 on RX01s and RX02s. Anyway, it got done.

  To use DCN, copy DCN.MAC to an RSX system. Assemble and link it.

>MAC DCN=DCN
>TKB
DCN=DCN
/
TASK=...DCN
//

Install it and type DCN
>INS DCN
> DCN
DCN>

or just run it

> RUN DCN
DCN>

  Obtain a dump file of a diskette on an RT-11 system by screen capturing the output of a DUMP command

.DUMP/TERMINAL DX0:

  Enter a command, with the input file as the dump from the RT-11 system, and the DSK filename as the output spec.

DCN>outfile.dsk=infile.dmp

  .DMP is the default extension for input files, and .DSK is the default for output files, so

DCN>outfile=infile 

would work just as well.

  DCN supports two switches - /DD stands for double density, and indicates that the dump file came from an RX02.

DCN>outfile/DD=infile

or

DCN>outfile=infile/DD

either way, it doesn't matter. This won't change an RX01 dump into an RX02 dump - it just tells DCN how to size the sectors and the output file.

  The other switch is /VE - it causes DCN to print the version number and then exit

DCN>/VE
Version V01A02
>. 

  OK I agree, it's damned unlikely anyone else will ever need this program - but, it's a pretty good example of how to write an RSX FCS file IO utility that uses CSI switches, so there should be some value in that...


Monday, March 20, 2017

The Big Ugly Old Thing

 A while back, my wife and I added a room on to our house to house my computers and serve as an electronics and radio lab. My wife designed it and it's a lovely airy space, with some nice architectural touches and plenty of light from skylights and windows.

  She was a little nonplussed at how it looked after I had moved in all of my gear. The racks of old computers didn't bother her - the steel breadracks filled with spare parts, half completed projects and assorted electrical and mechanical doodads did, however. 

  She fashioned some very nice curtains for the racks that really cleaned up the look of the place. Proud of her work, she started inviting her friends over to see what she'd done to spruce up the room.

  She reported how the first visitor took it. Her old friend Kathy was quite taken with the place and how well the "mess" had been covered up by the curtains. She however, apparently did not care for the look of the racked computers. While my wife was looking at something else, Kathy, staring at a rack of PDP-11/05s that had seen better days, said "Why don't you get rid of the big ugly old thing?". My wife didn't notice she was looking at the 11/05s and assumed she was talking about me. "Oh, I can't get rid of him now - we've been together too long" she answered. So, apparently, I'm  safe. But now I'm referred to as "the big ugly old thing"....

Wednesday, March 8, 2017

T11/6522


 The T-11 chip (code named "Tiny" at DEC) was a middlin' complete implementation of a PDP11 system on a single chip. It was designed to be used in embedded systems, controllers and the like.


  I've been using handbuilt single board systems here to control assorted projects. Unlike most electronics hobbyists, I don't use Arduinos or Raspberry PIs - things are retro here. I like to say that my gear is all state of the art - for 1980. Typically I had been using Motorola 6802 CPUs+RAM, an EPROM and a Rockwell 6522 chip to handle IO. These have been working out pretty well, but were very limited by the 100 bytes or so of RAM available in the 6802. It occurred to me that going forward, with just an additional chip or two, I could build a T-11 based controller, that had plenty of RAM and could be programmed in MACRO-11, a language I had decades of experience using. And, it would still be deliciously retro.

 A little browsing found the work of Pete McCollum. A goodly while back Pete McCollum designed and built an impressive one board PDP11 system based on the T-11. His design showed how to make a minimal 8 bit static memory system that avoided the complication that a full 16 bit dynamic memory system would have entailed. For me, the most valuable part of his project was that it made clear how to use the RAS and CAS T11 lines. I designed a similar board that was customized to my needs. 


DEC T11 CPU
27C256 32KB EPROM
62256 32KB static RAM (only 24 KB is decoded and used)
Rockwell 6522 VIA chip - 16 bits of I/O, two timers, a shift register and four handshaking lines.

    Memory map
                 0000-1FFF - RAM
                 2000-3FFF - RAM
                 4000-5FFF - RAM
                 6000-7FFF - RAM
                 8000-9FFF - ROM (start address = 8000)
                 A000-BFFF - ROM
                 C000-DFFF - ROM
                 E000-FFFF - 6522

     
  For address decoding, I departed from the almost universally used 74LS138 3-to-8 decoder chip. I used a 74LS156 dual 2-to-4 decoder. It can be wired as a 3-to-8 decoder and used like the 74LS138 to dice up the 64 KB address space into 8 8KB chunks, but it has the
advantage of having open collector outputs. Open collector outputs enabled me to "wire-or" these 8 KB pages together to easily select larger spaces - I wire-or'ed 4 of them together to select the 32 KB EPROM, and wire-or'ed three together to address 24KB of the RAM chip. The remaining 8KB line I used to select the 6522. It's a bit wasteful of address space to use 8KB for a single device that only has 16 registers, but it was simple and sufficient to my needs.


  I had anticipated trouble getting the 6522 chip to work with the T-11. The 6522 is from the Rockwell 6500 family of chips, and expects some 6500 style synchronization to occur via its PHI2 pin. Fortunately, the T-11 was designed to work with common microprocessor support chips, and all that was required was to connect the COUT T-11 line to the 6522 PHI2 line. The 6522 chip has an IRQ line that can be used to signal the CPU that a variety of conditions have occurred, such as a timer counting down to 0, or the shift register filling up. The T-11 supports the full PDP11 vectored interrupt architecture, but optionally can be much simpler - instead of reading vectors during an interrupt, four CPU lines can be used for four different interrupts. I connected the 6522 IRQ line to the CP0 line on the CPU. It's open collector, so I had to pull it up with a resistor. When the 6522 interrupts, software will query the 6522 to see which of 7 different interrupt events caused the interrupt.

  There are a number of configuration options available on the T-11. They are read at startup from the address and data lines. At power on, signal BCLR selects a 74LS367 so the options can be read. When startup is complete, BCLR goes high, and the 74LS367 goes into a tri-stated condition, which prevents it from interfering with normal operation.

  The T-11 includes internal pullups on these lines, so we only need to ground the lines that need to be zero - a one will be read on the lines that aren't connected.

  Here's the options I needed...8 bit, static RAM, constant speed clock, User (not test) mode, start address 100000 (octal), standard length microcycles, normal R/W signalling. Below are the settings needed to get those options.

   Bit    Effect              Value        T-11 pins
    0     constant clock      0            DAL 0
    1     Longcycle           1
   2-7    reserved
    8     normal/delayed      0            DAL 8
    9     static or dynamic   1
   10     N/A                 don't care
   11     8/16                1
   12     user/test           1
  13-15   start addr          001          DAL 14 and 15

   

  The T-11, like other DEC LSI11 devices, multiplexes the data bus with part of the address bus. A 74LS373 is used to separate and buffer the lines involved.


  So, given the above, I produced a schematic and soldered up a prototype using perfboard and IC sockets. That's how I usually make my other controllers. This took longer due to more chips involved. It also turned into a pain in the sitzplatz when the ROM socket I used, for some reason, refused to take solder on about half of its pins. I cleaned them, fluxed them, sanded them and begged for them to accept the solder and wires, but no go. I finally managed to mechanically attach some wires to them and then solder the connections to the wire. When complete, the board ran fine, but, due to the ROM socket issues, was very sensitive to vibration. By now I was in no mood to repeat that process, so I bit the bullet and did my first PC board design. I used Express PCB to layout the board and got a few boards made. A little pricy, but it saved a whole lot of time and aggro, so I think it was worth it. I checked and quadruple checked the layout before I sent it off, and I was pleasantly surprised when the boards worked - I figured I would at least have to run some of those green ECO wires to fix something I missed, but it wasn't required.

  So, now I've got running T-11s. I've hooked them up to some hex readouts and an HD44780 display for testing. Next thing I'm going to do is write a tiny little OS for it.

T11/6522 schematic

Saturday, February 25, 2017

It's In the Code

  Many years ago, when the Earth was young and we slew the dinosaurs with our mighty slide rules, there was a thing called  a DECUS Symposium. They were like a week long Grateful Dead concert, except they were aimed at people who were interested in DEC computing rather than at Dead Heads. There were the equivalents of rock stars (developers), excited fans  grooving together over the true word (attendees), and even the equivalent of the parking lot vendor scene (DEXPO, held next door to the Symposium). And there was plenty of intoxication at both types of events. Many thousands of people with like interests from all over the world came to share their enthusiasm for the universe that was all things DEC.

  I was lucky enough to attend several of these get-togethers (DECUS Symposia, not Grateful Dead concerts - well, actually, I was lucky enough to attend several of those as well, but that's another story).

  DECUS Symposia during the day consisted of many presentations, called "Sessions". Presented by DEC, other companies marketing products and services, or dedicated fans of DEC, they covered a myriad of subjects. Some interesting, others not so much. Some all dry technical material, others lighter and with a touch of humor (especially the Q&A sessions where DEC developers were asked about...curious..things the company had done or not done).


  Sessions at night tended to be...looser. The most (in)famous of these were the RSX Magic Sessions. These sessions were attended by the hard core RSX folk, from inside and outside of DEC. If you imagine a cross between "Alice in Wonderland" and "Where the Wild Things Are" you'll have a notion of the atmosphere that prevailed at these. And beer. There was a lot of beer. In the early days, you had to answer a question about how the RSX Exec worked to prove you were worthy of admission, although, sadly, this requirement was omitted in later years.

  In any case, at one of these convocations, DEC RSX expert Brian McCarthy gave an absolutely hysterical presentation he called "It's In the Code". It consisted of examples of strange and funny things found in the RSX source code and its attending comments. I don't recall many of the details, as I had consumed much beer at the time, but I was reminded of this a few days ago while reading the RSX clock queue handling code, when I ran across a funny comment. For your amusement...

*************************************************************************************
        MOV     $TKPS,R5        ;;; Get the ticks/second
        MUL     #10.,R5         ;;; multiply * 10
        CMP     (R4),R5         ;;; are we in range?
        BLO     7$              ;;; if LO, yes, finish the interrupt
;+
;  If we cannot process a clock interrupt within 10 seconds, we are
;       no longer processing in real time, and we may as well become
;       a VAX ... Call an end to this ... NOW!
;-

        BGCK$A  BF.SAN,BE.IDC,<FATAL>   ;;; System massively confused


************************************************************************************

  It was during a period of time when the RSX clan couldn't resist throwing a little shade on the VAX/VMS folks whenever the opportunity presented itself. Ah, good times...we'll not see their like again...


Tuesday, January 31, 2017

MACRO-11 List File Parser for embedded PDP11s




  Recently I've been working on a controller to use on assorted electronics hobby projects. I considered all the usual solutions - Raspberry PIs, Arduinos and the like. But that technology is not really where I'm at. I like solutions that are state of the art tech...for the 60s, 70s and 80s.

   I decided to use some old DEC T-11 chips I had kicking around for the CPUs - my needs for compute power and memory are modest, and I can write MACRO-11 code in my sleep. The T-11 chip was a DEC product designed for use in embedded products. It implements a pretty durn complete PDP11 on one chip, and has options that enable it to work with a minimum amount of support circuitry. The T-11 doesn't include any IO capability, so I needed a chip to handle that. I had no requirement for terminal IO so I didn't need a UART. I decided to use one of my all time favorite chips for IO - the Rockwell 6522 Versatile Interface Adapter. The 6522 has 16 lines for IO, two timers, and four handshaking lines. It will handle any of my projects with ease. 

   So, after lengthy design and prototyping sessions, I have T11/6522 systems to play with. Now, since all hardware projects inevitably turn into software projects, I needed some code to make it go. I started out with my trusty old PDP-11/70 programmer's card (my old pal Dr. Bob said that these should have included some blister packed Valium on the back of them).  Assembling code by hand and entering it into my EPROM programmer's editor was beyond tedious, and modifying the code once it was written was even more of a challenge. To make it a little easier, I started coding the programs in MACRO-11 on an RSX system, and then reading the part of the listing file that contained the octal bytes generated. Here's an example. The generated bytes are in the 3rd, 4th and 5th columns. I could read them from there, translate to hex, and enter them in the editor screen for my EPROM programmer. Lots easier than hand assembling everything. 


     25                                 ;+
     26                                 ;  set 6522 port A & B lines
     27                                 ;-
     28 100000                          z::
     29 100000 005003                   clr     r3
     30 100002 012706 077760            mov     #77760,sp
     31 100006 112767 000377 077667     movb    #377,ddra
     32 100014 112767 000377 077660     movb    #377,ddrb





This was easier, but was still pretty tedious - lots of reading, converting and typing to do. Time for more automation. I thought about reading and converting the object files from the assembly, but that looked like a lot of work. Instead, I wrote LISPAR.MAC, a utility that can read a listing file, parse out the memory columns, convert them to hex and output them as an Intel Hex record, ready for my EPROM programmer to digest.

  LISPAR is a pretty standard sort of RSX family utility - it uses GCML and CSI to get command input, FCS to do the file IO, and TPARSE to parse the input file. I've included it on the very remote chance that someone out there is working on PDP11 bare metal and doesn't already have a better cross assembler or other solution. I should point out that lispar is only useful on simple programs - it won't do anything useful if it encounters system calls or directives that don't emit anything (like .blkb for instance) or anything too hairy. It can deal with macros, but you'll need to add a .LIST MEB to the top of the program to get the binary expansions of the macros to appear in the listing.

  To make...
  >mac lispar=lispar
  >tkb
  TKB>lispar=lispar
  TKB>/
  Enter Options:
  TASK=...MLP
  //

  To use...
  >mac ,somefile=somefile
  >ins lispar
  >run lispar (or install it and invoke as MLP).
  MLP>outfile=somefile
  The output file will be in Intel Hex format.
  Input file default extension is .lst. Output file default extension is .hex
  The Intel Hex records are set to load at address 0. If you need another address, use the /adr switch.
  MLP>outfile=infile/adr:1000