Friday, December 22, 2017

VAX utility for changing process accounting info.


  One of my VAX sites was an engineering firm that used the computer 
to do work for many different clients. There were often dozens of 
different customers' projects in house at the same time, and some 
of the design engineers were simultaneously involved with several 
of them.

  The management of this firm needed accounting of computer use broken
down on a project by project basis. VMS provides an accounting
utility, but it is more oriented toward tracking the use of the
computer on an individual user account basis. Each login account has
an account string associated with it, and accounting data is keyed off
of this string. To make use of this accounting in a project-oriented
environment, it would be necessary for a user to log out and log in with
another username (with a different account string in the UAF) to begin
charging activity to another project. 

  Logging out, and then logging in again under another account was an
undesirable solution from just about any standpoint. For one, it takes
time to get logged back in on a busy machine, since process 
creation/image activation is not what VMS is good at. For another, it 
makes use of MAIL and PHONE for office communication next to 
impossible, since you never know what login a person will be using at 
any given time. This solution also was not received very well by the 
user community, and was mostly ignored - people tend to worry more 
about getting the work done than about logging in and out constantly 
to get the bean counting right.

  To address this need we evaluated several third party VAX/VMS
accounting products, but the quality of the packages was highly
variable. The ones that did perform as advertised were massive
overkill for our needs, being very expensive to purchase, and very
complicated to set up and maintain. We decided to look into
implementing our own form of project accounting, using as much of the
standard VMS accounting environment as possible.

  I experimented with the $SNDJBC system service, which enables a
program to write records to the system accounting file. It is easy
enough to use, alright, but what it does is write out "user"
accounting data, which the normal ACCOUNTING command refuses to
report on. This means that you have to write a program to read the
accounting file, find all of your records, and then write reports off
of them. 
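
  For what it's worth, the experiment looked roughly like the sketch
below. This is a from-memory reconstruction, not the code I actually
ran - in particular, verify the SJC$_WRITE_ACCOUNTING function code and
the SJC$_ACCOUNTING_MESSAGE item code against the System Services
Reference before trusting it, and note that writing accounting records
may require OPER privilege.

        .TITLE  WRITEACC
; Sketch - write a user record to the system accounting file.
        $SJCDEF                         ; SJC$ function and item codes

MSG:    .ASCII  /PROJECT CHANGE TEST RECORD/    ; free form user data
MSGLEN = .-MSG

ITMLST: .WORD   MSGLEN                  ; item buffer length
        .WORD   SJC$_ACCOUNTING_MESSAGE ; item code (check this name)
        .ADDRESS MSG                    ; buffer address
        .LONG   0                       ; no return length wanted
        .LONG   0                       ; end of item list

IOSB:   .QUAD   0                       ; completion status lands here

        .ENTRY  START,^M<>
        $SNDJBCW_S FUNC=#SJC$_WRITE_ACCOUNTING,-
                ITMLST=ITMLST,-
                IOSB=IOSB
        BLBC    R0,10$                  ; couldn't even queue the request
        MOVZWL  IOSB,R0                 ; else return the completion status
10$:    RET
        .END    START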

  Well, this approach was really starting to smell more like a job for
an application programmer (and that ain't me). I also resented having
my accounting records treated like second class citizens by the
ACCOUNTING utility. In order to avoid having to write a report
program, I started daydreaming about exotic VMS internals type
solutions (as I often do when I am confronted with the prospect of
real work). I realized that all I needed was a little piece of a
normal logout to occur - the part that writes a process termination
message in the accounting file. Likewise, I needed a little part
of a normal login to occur - the part that sets up a new account
string and zeroes out the fields that record usage for the process.
These two functions would be sufficient to change projects
without the hassle of logging out and back in. 

  It took a little time sitting in front of the microfiche reader,
but I found the code that writes the accounting record when a process
terminates. It is done by a KERNEL mode JSB call to an executive
routine called EXE$PRCDELMSG, presumably mnemonic for Process Deletion
Message. I was grateful that this was broken out into a subroutine,
rather than being buried in the body of the process deletion code.

  This routine takes one argument - R5 must contain either 0 or the
address of a block of non-paged pool to be deallocated. In this
utility, we have nothing to deallocate, so R5 is cleared before the
call. It is also necessary to set the final status for the process
before you call the subroutine - else the status of the last image to
terminate will go into the accounting record. The status is in the P1
space of a process, at location CTL$GL_FINALSTS. My code just puts an
SS$_NORMAL status there, but if need be, you could have it store a
status of your own choosing. This might be useful if you want to
be able to tell a project change from a real process deletion, for
instance. 
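
  To make that concrete, here is a minimal sketch of the kernel mode
piece - not the actual SETACCOUNT source (that is linked below). It
assumes the image is linked against SYS$SYSTEM:SYS.STB so the CTL$ and
EXE$ symbols resolve, and that the routine is dispatched with $CMKRNL.

        .TITLE  FAKELOGOUT
        $SSDEF                          ; SS$_NORMAL
;
; Kernel mode routine - writes a process termination record for the
; current process without actually deleting it.
;
        .ENTRY  FAKE_LOGOUT,^M<R2,R3,R4,R5>
        MOVL    #SS$_NORMAL,G^CTL$GL_FINALSTS   ; final status for the record
        CLRL    R5                              ; no pool block to deallocate
        JSB     G^EXE$PRCDELMSG                 ; write the termination record
        MOVL    #SS$_NORMAL,R0                  ; report success to $CMKRNL
        RET
;
; User mode main program - change mode to kernel and run the routine.
;
        .ENTRY  START,^M<>
        $CMKRNL_S ROUTIN=FAKE_LOGOUT
        RET
        .END    START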

  That took care of the simulated logout. To simulate a fresh login,
accounting-wise, I had to find the cells in the process and job data
structures where the accounting information is stored. A look in the
appendices of "VAX/VMS Internals and Data Structures" by Kenah and
Bate was enough to find them. They are listed below. 

  These next five labels are P1 space addresses (NOT offsets)

CTL$GL_VOLUMES    number of volumes mounted
CTL$GQ_LOGIN      login time, in VMS quadword date format
CTL$T_ACCOUNT     process account string
CTL$GL_WSPEAK     peak working set size
CTL$GL_VIRTPEAK   peak virtual page count

  The following label is an offset from the Job Information Block

JIB$T_ACCOUNT     job account string

  The following labels are offsets from the Process Header

PHD$L_IMGCNT      count of images this process has activated
PHD$L_CPUTIM      CPU time used
PHD$L_PAGEFLTS    count of page faults incurred
PHD$L_PGFLTIO     count of page fault I/Os performed
PHD$L_DIOCNT      number of direct I/Os performed
PHD$L_BIOCNT      count of buffered I/Os done


  To simulate a fresh login, all that was necessary was to load the
account string into the two account fields, and to zero the rest of
them. The account string fields are not like most of the text fields
you will find in VMS data structures, in that they are neither counted
ASCII nor descriptor data types - they are just eight characters in a
row. 
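
  The other half of the kernel mode sketch - the simulated login - might
look something like this. NEW_ACCOUNT is a hypothetical eight byte,
blank padded buffer holding the new account string; the JIB and PHD are
located through the current PCB. The structure offset macros come from
SYS$LIBRARY:LIB.MLB, and this fragment would live in the same module as
the sketch above and be dispatched the same way.

        .LIBRARY /SYS$LIBRARY:LIB.MLB/  ; internal structure definitions
        $PCBDEF                         ; PCB$L_JIB, PCB$L_PHD
        $JIBDEF                         ; JIB$T_ACCOUNT
        $PHDDEF                         ; PHD$L_* accounting cells

NEW_ACCOUNT:
        .ASCII  /JOB709  /              ; hypothetical - 8 bytes, blank padded
;
; Kernel mode routine - make the current process look freshly logged in.
;
        .ENTRY  FAKE_LOGIN,^M<R2,R3,R4>
        MOVL    G^CTL$GL_PCB,R4         ; our software PCB
        MOVL    PCB$L_JIB(R4),R3        ; job information block
        MOVL    PCB$L_PHD(R4),R2        ; process header
        MOVQ    NEW_ACCOUNT,G^CTL$T_ACCOUNT     ; 8 chars in a row, so one
        MOVQ    NEW_ACCOUNT,JIB$T_ACCOUNT(R3)   ; quadword move does the job
        CLRL    G^CTL$GL_VOLUMES        ; zero the usage cells...
        CLRL    G^CTL$GL_WSPEAK
        CLRL    G^CTL$GL_VIRTPEAK
        CLRL    PHD$L_IMGCNT(R2)
        CLRL    PHD$L_CPUTIM(R2)
        CLRL    PHD$L_PAGEFLTS(R2)
        CLRL    PHD$L_PGFLTIO(R2)
        CLRL    PHD$L_DIOCNT(R2)
        CLRL    PHD$L_BIOCNT(R2)
        $GETTIM_S TIMADR=G^CTL$GQ_LOGIN ; ...and stamp a fresh login time
        MOVL    #SS$_NORMAL,R0
        RET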

  All that was needed to complete the utility was a user interface, to
allow the users to enter a command to change projects. LIB$GET_FOREIGN
is used to input a new account string. The string is checked to make
sure it is eight characters or less. If it is shorter than eight
characters, it is padded with spaces. At my site, this is sufficient.
Some sites will need to add additional validation of the input string
to make certain that it is a valid project code, or that this person
can charge to it, or whatever. 
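
  The LIB$GET_FOREIGN end of it is plain user mode code. A stripped
down sketch (with a hypothetical GET_ACCOUNT routine name) might look
like this - the destination descriptor is fixed at eight bytes, so an
over-length reply comes back as LIB$_INPSTRTRU and the command can bail
out without touching anything:

NEW_ACCOUNT:
        .ASCII  /        /              ; eight bytes, blank to start with
ACC_DESC:
        .LONG   8                       ; fixed length descriptor for it
        .ADDRESS NEW_ACCOUNT
PROMPT: .ASCID  /Enter account string...>/
RETLEN: .WORD   0

        .ENTRY  GET_ACCOUNT,^M<R2,R3,R4,R5>     ; MOVC5 clobbers R2-R5
        MOVC5   #0,NEW_ACCOUNT,#^A/ /,#8,NEW_ACCOUNT ; re-blank the buffer so
                                                     ; short replies end up
                                                     ; padded with spaces
        PUSHAW  RETLEN                  ; significant length comes back here
        PUSHAQ  PROMPT                  ; prompt used if no argument was given
        PUSHAQ  ACC_DESC                ; where the string lands
        CALLS   #3,G^LIB$GET_FOREIGN
        RET                             ; status (LIB$_INPSTRTRU, etc.) in R0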

  Here's the source for SETACCOUNT.

setaccount.mar

  To use SETACCOUNT, first assemble and link it

$ MAC SETACCOUNT
$ LINK SETACCOUNT /NOTRACE

  For average run-o'-the-mill users to use this, it has to be installed
with CMKRNL privilege. You'll have to decide for yourself if your site is
OK with this requirement.

$ INSTALL/OPEN/HEAD/SHARE/PRIV=CMKRNL SETACCOUNT.EXE


  Then, define a foreign symbol to invoke it, specifying a command 
name of your choice, and an appropriate directory specification.

$ SETACCOUNT:==$somedisk:[somedir]SETACCOUNT.EXE

  To change projects, simply use the command. If you enter the command 
with no new account string, you will be prompted for one.

$ SETACCOUNT JOB709

$ SETACCOUNT
Enter account string...>DIREWOLF

  If the string entered is too long, an error message is printed, and
no accounting information is altered.

$ SETACCOUNT THISISWAYWAYTOOLONG
LIB-F-INPSTRTRU, input string truncated


  After each execution of the command, a new accounting record will be
written. The ACCOUNTING utility is then usable to produce reports by
project. I should point out that the SETACCOUNT utility will not
change the accounting information of any subprocesses that are in
existence when the utility is invoked. All subprocesses created after
the use of the SETACCOUNT command will, however, inherit the current
correct accounting information (they will acquire it from the
JIB$T_ACCOUNT field in the Job Information Block). If your site has
applications that use a lot of subprocesses that are created early
on and don't terminate until logout, then this utility might not be
appropriate for you. Changing all of a process's subprocesses when the
SETACCOUNT command is issued is a little more involved (it requires
queueing ASTs to the other processes), and was more of a solution than
we needed.

Thursday, December 21, 2017

VMS Binary File Editor

A long long time ago, when the Earth was young, and we rode our 
dinosaurs to work every morning, I worked as a system manager at a large 
VAX site. It was around 1986 and we had a pretty large cluster that we
used, along with some other local VAXes, to support around 8,000 engineers
and office workers all over the world. 

  One day, I got a call from the folks that managed the accounts on the
system. They allowed as how they were getting errors when they tried to
add any new accounts to the UAF. Next, the Help Desk called to relate that
lots of people couldn't log in anymore. I found this...alarming. A little
checking soon revealed that the UAF was corrupt. It apparently had a bad
block in the middle of it, and RMS was not well pleased when it tried to
read it. 

  I tried the usual RMS fixes. First, on the theory that maybe the bad
block was in the middle of a secondary index, I tried to convert the file
to a sequential file, and, if that had worked with no errors, I could have
then converted it back to an indexed file, with no loss of data. No soap -
it couldn't successfully convert to a sequential file. 

  I thought about the old RMS trick of "patching" around the bad bucket.
That can make an RMS file readable again, but it had a pretty good chance of
losing some records. Losing random records out of the UAF did not appeal
to me as a solution.

  I considered restoring from backup, but the backup had been done
Friday, and it was Monday afternoon now - it was a busy place and a lot
of work had been done since then (at the time, the sun never set on this
engineering firm). Accounts had been added and deleted, identifiers had
been granted and revoked, last login times updated, passwords changed -
well, you know how it is - lots of changes. 

  Using the backup would have been a very large pain in the sitz-platz. 
But I got to thinking - a lot of the UAF doesn't change all that much from 
day to day - the odds were good that the bad block, be it a user record or 
a piece of metadata, had occurred in a spot that hadn't changed since
the last backup. Finding which block was bad was trivial - I DUMPed the
file and it keeled over and told me when it hit the bad block. Then I used
DUMP again on the rest of the file, starting after that block, to make
sure only one block was bad. All I needed then was a program to read that
block out of the UAF from the Friday backup, and update it into the current
production one, writing over the bad one (well, bad block relocation would
take place, but I wasn't worried about that level of detail for this problem -
functionally, the bad block got overwritten). 

  So....that's what I did. I wrote a block IO program that read block
number X out of the good backup file, and updated it into block X of the
bad production file. I held my breath and did a CONVERT...it succeeded. A
little testing with UAF showed it was all good to go now - the failures
the accounts folks were seeing didn't happen anymore. All of the hard 
working engineers and office workers could log in again. The phones didn't 
ring off the wall with people asking what happened to the changes from the
last three days. All was well again in Whoville and the phone stopped
ringing. 
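
  For the curious, the one-off fixup program amounted to something like
the sketch below. This is a reconstruction, not the original: it uses
RMS block I/O, the file names are stand-ins, and the bad block number
is hard coded as BAD_VBN (the real thing took it from the DUMP output).
The production UAF was open by the system at the time, so the sharing
options may need adjusting for other situations.

        .TITLE  FIXBLOCK
        $SSDEF                          ; SS$_NORMAL
BAD_VBN = 42                            ; stand-in for the bad block number

BLKBUF: .BLKB   512                     ; one disk block

GOODFAB: $FAB   FNM=<SYSUAF_FRIDAY.DAT>,FAC=<BIO,GET>
GOODRAB: $RAB   FAB=GOODFAB,UBF=BLKBUF,USZ=512,BKT=BAD_VBN

BADFAB: $FAB    FNM=<SYSUAF.DAT>,FAC=<BIO,GET,PUT>,SHR=<GET,PUT,UPI>
BADRAB: $RAB    FAB=BADFAB,RBF=BLKBUF,RSZ=512,BKT=BAD_VBN

        .ENTRY  START,^M<>
        $OPEN   FAB=GOODFAB             ; open the good backup copy
        BLBC    R0,10$                  ; any RMS error falls out through
        $CONNECT RAB=GOODRAB            ; 10$ and DCL prints the message
        BLBC    R0,10$
        $READ   RAB=GOODRAB             ; block I/O read of the good block
        BLBC    R0,10$
        $OPEN   FAB=BADFAB              ; open the damaged production file
        BLBC    R0,10$
        $CONNECT RAB=BADRAB
        BLBC    R0,10$
        $WRITE  RAB=BADRAB              ; overwrite the bad block
        BLBC    R0,10$
        $CLOSE  FAB=BADFAB
        $CLOSE  FAB=GOODFAB
        MOVL    #SS$_NORMAL,R0
10$:    RET
        .END    START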

  But the whole mess made me think that I ought to have a utility on hand 
and ready to go that could easily read, write, edit and block copy data
around for any future situations such as the above - something a little
more general purpose than the fixup program I used that time. I also had
need of a utility that could do binary edits on files and was easier to
use than PATCH/ABSOLUTE.

  ZAP was that program. I named it after the famous RSX-11 ZAP program,
which was a brilliant hack that turned ODT in RSX into a file editor with
the addition of just a few lines of code.

  ZAP will let you edit files character by character, in hex or ASCII. It 
will allow you to copy blocks around inside a file, as well as copy a 
block or blocks from one file and write them into another file. ZAP is one 
of a very few programs I wrote in Fortran instead of Macro-32, so, by
happy coincidence, it is also one of the very few programs I have written
that will work on Alphas (and likely Itaniums, although I haven't tested it
on one) as well as VAXes.

  Here are the sources for ZAP

build.txt

zap.for

screen_init

ufo.for

read.for

write.for

format_line

fresh.for

  To build...

Rename build.txt to build.com (Google Sites won't let me upload a file
with the extension ".com"...) and then execute it.

$ rename build.txt build.com
$ @build

  To use

$ zap :== $disk:[directory]zap.exe
$ zap somefiletozap.ext

   Or, just run zap

$ run zap
  And you will be prompted for what file you want to edit.


  The leftmost panel in ZAP has a command summary. Here's what a ZAP
session looks like.

[screen shot of a ZAP session in hex mode]

And here it is in ASCII mode:

[screen shot of the same session in ASCII mode]

  Basically, in any block, use the cursor keys to move around. When you
reach the bottom or top of the screen, the display will scroll up or down
as needed within the current block. It will not scroll into the next
block. To change a value, position on it, then enter the new value. If
you are in HEX mode and want to enter a new value, the entry must be two
digits (leading zeros are required). To write any changes you make to the
file, press the DO key or GOLD-W before leaving the block (several
functions have two key sequences that can perform them, since not all
keyboards have DO, Select, and other DEC terminal specific keys). Note
that the hex mode display is formatted like a VMS dump - the lower
addresses are on the right, increasing as you go to the left. ASCII mode
is like text; it goes from left to right. Blocks that are copied go into
a temporary file, so you can copy blocks from a file, close that session,
start ZAP on another file, and paste those blocks into it.

Tuesday, December 19, 2017

VAX utility for changing page protection on pages in system space.


  Back in the day, I was involved in a project that needed to make a small 
routine located in non-paged pool accessible from all processes on a
system. The problem was that non-paged pool pages are protected at
ERKW - Exec Read, Kernel Write. My routine needed to execute in User mode,
and thus could not run from those pages. I needed a way to alter the
protection of the pages that the code resided in.
  
  I wrote a little utility that would allow me to examine and change page
protection settings from DCL. It's a simple thing, really - it gets a
command line, parses it with TPARSE, and then looks up the existing page
protection in its PTE. If a new protection was specified on the command
line, it is updated. If not, it just prints out the existing value. 
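
  The heart of it - finding the system page table entry for an S0
address and changing its protection field - comes down to a fragment
like the one below. This is a sketch of the idea rather than the APRT
source (which is linked below): it runs in kernel mode, assumes a link
against SYS$SYSTEM:SYS.STB for MMG$GL_SPTBASE, handles only S0
addresses, and hard codes UREW as the new protection where the real
utility takes it from the command line.

        .LIBRARY /SYS$LIBRARY:LIB.MLB/  ; for the definition macros below
        $PTEDEF                         ; PTE$V_PROT, PTE$S_PROT
        $PRTDEF                         ; PRT$C_xxxx protection codes
        $PRDEF                          ; PR$_TBIS
;
; Kernel mode fragment.  R0 contains the S0 virtual address of interest.
;
        MOVL    G^MMG$GL_SPTBASE,R1     ; virtual address of the system page table
        EXTZV   #9,#21,R0,R2            ; virtual page number within S0 space
        MOVAL   (R1)[R2],R1             ; address of that page's PTE
        EXTZV   #PTE$V_PROT,#PTE$S_PROT,(R1),R3          ; old protection code
        INSV    #PRT$C_UREW,#PTE$V_PROT,#PTE$S_PROT,(R1) ; set the new one
        MTPR    R0,#PR$_TBIS            ; flush the stale translation buffer entry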

  The syntax is simple...

  Print the page protection for an address.

$ aprt 81000000
Page 81000000 protection = URKW 

  Print the page protections for the pages between address1 and address2

$ aprt 81000000:81000400
Page 81000000 protection = URKW
Page 81000200 protection = URKW
Page 81000400 protection = URKW 

  To modify a page...

$ aprt 81000000/prot=urew

Page 81000000 protection = URKW 


   To modify a range of pages...

$ aprt 81000000:81000400/prot=urew

Page 81000000 protection = URKW  
Page 81000200 protection = URKW  
Page 81000400 protection = URKW 

Note that the protection listed is the protection BEFORE the change is
applied.


  The page protection can have values of...

NA        ;no access
RESERVED  ;invalid protection - never used
KW        ;kernel write
KR        ;kernel read
UW        ;user write
EW        ;executive write
ERKW      ;exec read, kernel write
ER        ;exec read
SW        ;supervisor write
SREW      ;supervisor read, exec write
SRKW      ;supervisor read, kernel write (bet this is never used)
SR        ;supervisor read
URSW      ;user read, supervisor write
UREW      ;user read, exec write
URKW      ;user read, kernel write
UR        ;user read

  Now, I gotta warn ya - this utility is intended for people who know what 
they are doing. You can jam up your system mighty quick if you set page 
protections "funny". I would be particularly cautious about changing the 
protection of pages that don't have write access to having it - I'm not 
sure what backing store would get used if the page faulted.... so proceed 
with caution...and as always, proceed at your own risk.

  Here's aprt.mar

APRT.MAR

  To build the program...
$ mac aprt
$ link aprt
$ aprt :== $disk:[directory]aprt.exe

  You need to substitute the disk and directory spec where aprt.exe is located.