WP 5862 and 5865

Filiberto and Richard moved the cabling for the PT170 and PT180 Inficon gauges from the second port on h1ecatc1 to h0velx. I updated the code on h0velx to read them. For some reason the gauges stopped reading in Torr when they were moved; I changed the reading back to Torr in the CoE parameters and sent the command to save the settings to non-volatile memory.

The code change also added error channels, as well as channels to set the amount of smoothing on certain channels (the amount of smoothing had previously been hardcoded). The smoothing on these channels is now at the default of 0 until I set the values back to what they were hardcoded to.

I also changed the filter in the terminal for the CP2 LN2 level readback. On h0velx, terminal M17, CoE parameter 8000:15, I changed the filter from '50 Hz FIR' to 'IIR 7'.

I removed all of the channels and terminals for the Inficon vacuum gauges from h1ecatc1, h1ecatx1 and h1ecaty1. They were temporarily on these computers until the new Beckhoff vacuum controls were installed.

The channel name changes for the PT170 and PT180 gauges are:

old name | new name
H1:VAC-LX_X2_PT170_PRESS_OVERRANGE | H0:VAC-LX_X2_PT170_MOD1_OVERRANGE
H1:VAC-LX_X2_PT170_PRESS_SENSOR | H0:VAC-LX_X2_PT170_MOD1_SENSOR
H1:VAC-LX_X2_PT170_PRESS_TORR | H0:VAC-LX_X2_PT170_MOD1_PRESS_TORR
H1:VAC-LX_X2_PT170_PRESS_TRIP | H0:VAC-LX_X2_PT170_MOD1_TRIP
H1:VAC-LX_X2_PT170_PRESS_UNDERRANGE | H0:VAC-LX_X2_PT170_MOD1_UNDERRANGE
H1:VAC-LX_X2_PT170_PRESS_VALID | H0:VAC-LX_X2_PT170_MOD1_VALID
H1:VAC-LY_Y2_PT180_PRESS_OVERRANGE | H0:VAC-LX_Y2_PT180_MOD1_OVERRANGE
H1:VAC-LY_Y2_PT180_PRESS_SENSOR | H0:VAC-LX_Y2_PT180_MOD1_SENSOR
H1:VAC-LY_Y2_PT180_PRESS_TORR | H0:VAC-LX_Y2_PT180_MOD1_PRESS_TORR
H1:VAC-LY_Y2_PT180_PRESS_TRIP | H0:VAC-LX_Y2_PT180_MOD1_TRIP
H1:VAC-LY_Y2_PT180_PRESS_UNDERRANGE | H0:VAC-LX_Y2_PT180_MOD1_UNDERRANGE
H1:VAC-LY_Y2_PT180_PRESS_VALID | H0:VAC-LX_Y2_PT180_MOD1_VALID

Gerardo shimmed the LLCV for CP2 during this work. It is now back on PID control.

The MEDM screens have not been updated on the Beckhoff vacuum controls computers other than h0velx. This still needs to be done.
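For a quick sanity check after a rename like this, something along these lines can be run from a CDS workstation; this is only a minimal sketch using pyepics, with the channel names taken from the table above (the new smoothing and error channel names are not listed here, so they are omitted):

from epics import caget

NEW_CHANNELS = [
    "H0:VAC-LX_X2_PT170_MOD1_PRESS_TORR",
    "H0:VAC-LX_X2_PT170_MOD1_VALID",
    "H0:VAC-LX_Y2_PT180_MOD1_PRESS_TORR",
    "H0:VAC-LX_Y2_PT180_MOD1_VALID",
]

for name in NEW_CHANNELS:
    value = caget(name, timeout=2.0)
    # None means the channel did not connect (e.g. a stale or old channel name)
    print("%-45s %s" % (name, value))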
Today my heart stopped when, after checking the Vertex pressure, I combined the Vertex RGA volume with that of the pump system to be used during its bake-out by opening the valve that separated them and, a few minutes later, re-checked the Vertex pressure. I couldn't help noticing an indicated pressure increase of more than two decades! Not good for the old "TICKER"! It turned out that, by dumb coincidence, something the CDS people were doing resulted in an incorrect pressure being indicated on the MEDM screen I was viewing. This would be fine if the anomalous value were obviously bogus, but instead it looked like a "real" pressure! Anyway, it wasn't clear to me that the activities described in the CDS WP were going to have this byproduct. These initial anomalous values have since returned to normal, but now PT120B and PT140B seem to have improved by a factor of 2. Does this mean that the old pressures were incorrect, or are the new values incorrect?
Is there a way to independently check the readings outside of CDS? Sorry for the bad communication.
I noticed that I no longer needed to force the cold cathode on for PT140. This may somehow be related to the h0velx code change, but I'm not sure how. PT120 mystifies me; it is on the h0vely computer, which was not touched today.
The jumps are in the raw counts as well (see attached).
Attached are pressures for the gauges in the LVEA for the last 2 days.
Both the Pirani and Cold Cathode dropped for PT140. Only the Cold Cathode dropped for PT120 (see attached).
Summary:
Nonsense hardware manufacturing error.
The 5-coax multi connector on the 36/45MHz WFS (originally the ASAIR WFS) is mechanically rotated 180 degrees relative to the one on the 9/45MHz REFLAIR WFS, and the connection inside the WFS head itself is apparently screwed up such that what is supposed to be connected to P1 is actually connected to P5, P2 to P4, and P3 to P3. In other words, the connector looks like the mirror image of what it should be. This is the case for both of the 36/45MHz WFS we have (S/N 1300512, which we used yesterday, and S/N 1300511, which was a spare on ISCT6).
This caused the test input on the WFS head to be routed to Q1 LF (i.e. quadrant 1, 36MHz) on the feedthrough. Nothing worked yesterday (alog 26942) because of this.
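For illustration only (this is not from any WFS documentation), the mirror-image routing described above on a 5-pin single-row connector amounts to pin i landing on pin (6 - i):

def mirrored_pin(pin, n_pins=5):
    # Mirror image of a single-row n-pin connector: pin i maps to (n_pins + 1 - i)
    return n_pins + 1 - pin

for p in range(1, 6):
    print("P%d -> P%d" % (p, mirrored_pin(p)))
# P1 -> P5, P2 -> P4, P3 -> P3, P4 -> P2, P5 -> P1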
Instead of fixing the WFS, I externally rerouted the cables.
In addition, the multi-coax connector on the WFS side is/was kaput (this was on S/N 1300512), so I had to use S/N 1300511.
Now RF works. Next step would be to connect up the demod.
Cabling/Connector issue details:
Each WFS has one 5-coax RF connector and one 4-coax RF connector (plus one DC interface connector).
The 4-coax connector is connected to Q3 LF, Q3 HF, Q2 LF and Q2 HF, from right to left when you look at the WFS from the front, and both the cable and the cabling were good: all four coax cables were labeled and routed to the correct feedthrough.
The 5-coax connector seemed to be a mirror image of what it should be as far as the cable routing was concerned:
Multi coax on WFS head | feedthrough |
Q1 LO | test |
Q1 HI | HF4 |
Q4 LO | LF4 |
Q4 HI | HF1 |
TEST | LF1 |
Look at the first attachment, showing the connectors on the 9/45MHz WFS on the left and the 36/45MHz WFS on the right, and you'll find that the 5-coax multi connector is rotated 180 degrees on the 36/45MHz unit. We have two 36/45MHz units, and both exhibited this error. My guess is that the error was there when these units were shipped to LHO.
You can also see that the 4-coax multi connector on the 36/45MHz S/N 1300512 is broken.
I replaced the WFS with S/N 1300511, and externally rerouted:
Multi coax on WFS head | feedthrough |
Q1 LO | LF1 |
Q1 HI | HF1 |
Q4 LO | LF4 |
Q4 HI | HF4 |
TEST | test |
RF TF measurements:
After the above-mentioned fix, I measured the transfer function from outside of the table via the feedthrough, and things made sense.
Second attachment shows the transfer function from test input to LF1, LF2, LF3, LF4, HF1, HF2, HF3, HF4, in this order.
Labeling issue:
The BNC ends of the 5-coax multi connector have labels that don't quite make sense. I didn't quite figure out what kind of error this is, and I didn't fix it, as I don't know what the right thing to do is. Anyway, right now they are as follows:
Multi coax on WFS head | label on BNC |
Q1 LO | test |
Q1 HI | H4 |
Q4 LO | L1 |
Q4 HI | H1 |
TEST | L4 |
I will never know how this happened, but obviously it did.
1. Is this the first time the in-air WFS chain has been used?
2. How did the connector get destroyed as seen in the photo? The sidewall of the connector is torn.
3. It is not hard to fix the connector flip, but the damaged connector will need to be replaced at this point.
Please send the damaged units back to Caltech for repair (S1300512 and S1300501) and I will fix them myself.
The attenuated output from the front-end, or seed, laser was admitted into the high power oscillator ring. No signs of clipping on the intermediate optics were observed. However, the overlap between the beam promptly reflected from the oscillator's output coupler and the beam that traversed the ring was slightly off - the interference fringes were clearly left of centre. This was corrected. The mis-alignment of the output coupler was most likely caused by the drag-wipe cleaning the previous day.

Each laser head was powered up with 5A of pump current. No bright spots were observed on the optics that would indicate some kind of point damage. Each head was then powered up to 50A and again, no point damage spots were observed.

The oscillator was then fully powered up, starting at 40A-45A per head. The laser power was noticeably down. The beam from the oscillator, shown in FirstTurnOn1.png, was ugly but stable. My interpretation of this was that there was no point damage on the optics but that the resonator was severely mis-aligned. The exact reason why the resonator would have become mis-aligned is not clear to me. Adjusting the output coupler did improve the beam shape but not the output power. A more thorough alignment will be undertaken tomorrow.
Kyle

Multiple hiccups delayed the actual start of the bake and, as such, this exercise will now drag into Friday. I am utilizing a second isolation valve by adding, in series, the turbo+gauge portion of a donor pump cart. I modified the wiring so that this redundant isolation valve closes on a foreline pressure set point. The nominal foreline isolation valve closes only upon the loss of AC power to the scroll pump motor.
Ops Day Shift: 16:00-23:59 UTC (08:00-15:59 PT)
State of H1: PSL is down; hardware, software, and channel name changes were made today
Activities today - general:
Activities - details:
Currently working on site, as of 4PM PT:
as of 5PM PT:
I noticed that the Yend ALS laser was not hitting its input pointing QPDs. While looking around, I saw that many Beckhoff EPICS channels were reading zero. I used the SDF interface to load the down_160502_115544.snap EPICS database (i.e., the new way of doing a burt restore), and things immediately came back.
This looks like it won't be necessary on the Xend.
The HWS camera and RCX CLINK were restored in the OFF state. I just restarted them:
aidan.brooks@cdsssh:~$ caput H1:TCS-ETMY_HWS_RCXCLINKSWITCH On
Old : H1:TCS-ETMY_HWS_RCXCLINKSWITCH Off
New : H1:TCS-ETMY_HWS_RCXCLINKSWITCH On
aidan.brooks@cdsssh:~$ caput H1:TCS-ETMY_HWS_DALSACAMERASWITCH On
Old : H1:TCS-ETMY_HWS_DALSACAMERASWITCH Off
New : H1:TCS-ETMY_HWS_DALSACAMERASWITCH On
I updated and restarted the slow controls SDF monitors using the RCG 3.0.2 code.
J. Kissel
ECR: E1600118
FRS/II Ticket: 5307
WP: 5866

As my last duty serving on the O1 blind injection team, I've removed the blind injection front-end code infrastructure from the common library part,
/opt/rtcds/userapps/release/cal/common/models/CAL_INJ_MASTER.mdl
and from the top level of each of our local models in
/opt/rtcds/userapps/release/cal/h1/models/
    h1calcs.mdl
    h1calex.mdl
and committed them to the repo.

Thankfully, the only MEDM infrastructure that was ever created / used were the automatically generated screens from the RCG, so no work needs doing there. Note that this *gives back* two 16 kHz channels to the data rate pool. Nice!

LLO need only update the CAL_INJ_MASTER.mdl part, and then remove any summation and tags from the top level of the corner / end-station model.
There have been reports of some issues with the new guardian log reading infrastructure. I have a suspicion that some of the problems might have been associated with the h1fs0 crash this morning (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=26963). In any event, I'm looking into the issue, and will report what I find.
And to be sure, please make sure you're using the latest version of the client, which is r1541. This new version was installed on Friday to fix a couple of issues. You may need to log out/in to refresh.
During the "make installWorld" part of RCG3.0.2 install the /opt/rtcds NFS server crashed (h1fs0). We reset h1fs0, but the NFS services did not come back cleanly. We restarted the nfs-server daemon and the services restarted correctly and the NFS clients reconnected.
Looking at the h1fs0 logs, problems were being reported starting at 09:05 PDT this morning.
We are restarting the install process and monitoring the error logs and disk usage carefully.
Collected the temperature and RH data from the two 3IFO Dry Boxes in the VPW and the 3IFO desiccant cabinet in the LVEA. Relative humidity data for all three containers are fine (mean range between -0.71 and 3.29%). The temperature data tell a different story: there were several 20-plus-degree swings in the VPW temperature during the first part of the month, while in the second half of the month the temperature swings were around 10 degrees.
Did a flow check and Zero Count test of all operating dust monitors (except in the PSL as those were checked at install). All DMs are performing normally.
Daniel, Patrick, Matt
We did a little more rotation stage science today. The objective was to understand the remaining acceleration mystery, and to confirm that the resistor was helping. The on-screen EPICS values are the ones being used for acceleration and deceleration, and they now have an upper limit of 65000 (or 65s to reach the maximum speed of 100 RPM). Note that the on-screen velocity is in units of 0.01% of the maximum, so a value of 10000 gives the maximum speed of 100 RPM, and a value of 100 gives 1 RPM. (These RPM values are presumably for the motor, not the waveplate.)
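As a reference for the unit conventions just described, here is a minimal sketch (assumed linear scalings only, not anything taken from the controller firmware) of converting the on-screen EPICS values to physical units:

MAX_RPM = 100.0  # maximum motor speed

def velocity_rpm(epics_value):
    # On-screen velocity is in units of 0.01% of maximum: 10000 -> 100 RPM, 100 -> 1 RPM
    return MAX_RPM * epics_value * 0.0001

def ramp_time_seconds(epics_value):
    # Acceleration/deceleration appear to be milliseconds to reach maximum speed:
    # the upper limit of 65000 corresponds to 65 s
    return epics_value / 1000.0

print(velocity_rpm(10000))       # 100.0 RPM
print(velocity_rpm(100))         # 1.0 RPM
print(ramp_time_seconds(65000))  # 65.0 s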
We found that with the current firmware settings (which Patrick will append), the 50 Ohm resistor was not necessary, so we removed it. This means that the other waveplates in the field need no hardware modification to achieve the 0.01 deg accuracy we are seeing with this rotation stage.
The attached screenshot shows a move from 10W to 2W (velocity = 3000, acc = 6000, dec = 6000) and then from 2W to 10W (velocity = 300, acc = 60000, dec = 60000). Note that the higher values of acceleration and deceleration for lower velocities result in a smoother ride.
Current settings attached.
A couple of diagnostic features have been added to the code:
I reduced the calibration velocity from 3000 to 500. Driving too fast towards the home position seems to reduce its reproducibility. This test will have to be repeated by looking at the laser power.
The TCS rotation stages also got the new motor settings and can be tested. The TCS MEDM screens need to be updated as well. (Why are they different?)
J. Kissel

Continuing the work Corey et al. have done cleaning up SDF files (see LHO aLOG 26917), I've gone one level deeper to ensure that all snap files used in the target areas are soft links to locations in the userapps repo.

There *is* a safe.snap for every front-end model / EPICS db, of which there are 129. Unfortunately, because they're a human construction, there are fewer OBSERVE.snaps (112) and down.snaps (28). OBSERVE.snaps at least exist for every front-end model / EPICS db that existed during O1. However, weather station dbs, dust monitor dbs, and PI front-end models are new since O1, so OBSERVE.snaps don't exist for them. Further, down.snaps seem to have only been created for ISC models, the globally controlled SUS models, and the ISC-related Beckhoff PLCs. We know the safe.snaps are poorly maintained, and sadly we haven't been in a configuration we'd call OBSERVE.snap-worthy in a long time, so they're also out of date. On top of all this, each subsystem seems to have a different philosophy about safe vs. down.

Daniel, Sheila, Jamie, and I were discussing this on Friday, and we'd come to the conclusion that it is far too difficult to maintain three different SDF files. If the SDF mask is built correctly, then there should be no difference between the "down" and "safe" states. The inventors of the "safe" state are the SEI and SUS teams, because they have actuators strong enough to damage hardware. As such, they've designed the front-end models such that all watchdogs come up tripped and user intervention is required to allow for excitations. So, as the model comes up, it's already "safe" regardless of its settings. Of course, even though the IFO is "down" at that starting point, we still want the platforms to be fully isolated. So, in that sense, for the ISIs "down" is the same as "OBSERVE." And again, if all settings that change via guardian are correctly masked out, then "safe" is the same as "down" is the same as "observe," and you only need one file.

So, eventually, we should get back to having only one file per subsystem. But this will take a good bit of effort to make sure that what's controlled via guardian is masked out of every SDF, and vice versa, that what is masked out of SDF *is* controlled by guardian. The temporary band-aid idea will be at least to make sure that every model's down.snap is the same as its safe.snap. Because Corey et al. put a good bit of effort into reconciling the down and safe.snap files today, I've copied all of the down.snaps over to the safe.snaps and committed them to the repo. I've not yet gone as far as to change the safe.snap soft links to point to the down.snaps, but that will be next.

Anyway, this aLOG is kinda rambling, because this activity has been disjointed, rushed, and sporadic, but I wanted to get these thoughts down and give an update on the progress. In summary, at least every safe, down, and OBSERVE.snap in the target area is a soft link to the userapps repo, and all of those files in the userapps repo are committed. More tomorrow, maybe.
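For concreteness, here is a minimal sketch (not the actual procedure; the glob pattern for the snap file locations is an assumption) of the "copy every down.snap over its safe.snap" step described above:

import glob, os, shutil

USERAPPS = "/opt/rtcds/userapps/release"

# Hypothetical pattern -- the real snap files live in various per-subsystem
# directories in the userapps repo, so adjust accordingly.
for down in glob.glob(os.path.join(USERAPPS, "*", "h1", "burtfiles", "*down.snap")):
    safe = down.replace("down.snap", "safe.snap")
    print(down, "->", safe)
    shutil.copyfile(down, safe)  # overwrite the safe.snap with the down.snap contents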
Thanks for the write-up here! A couple of comments/notes:
1) Does every frontend really have a safe.snap? I thought I could not find safe.snaps for some of the ECAT (i.e. slow controls) frontends. Or is there a way for the SDF Overview MEDM to not display *all* SDF files?
2) If we manage to get to ONE SDF file, what will we name it? Will we stay with "safe", since that's what the RCG calls out, or will we change it to a more preferred name? (This was another subtle note I overheard you all discussing on Friday.)
~21:01 UTC I turned off the camera and frame grabber, then power-cycled the computer (then turned the frame grabber and the camera back on). Only the HWSX code is running at the moment. Things look good for now.
May 3 16:44 UTC Stopped HWSX code and ran HWSY code alone. HWSX code had been running fine since yesterday.
May 5th 18:20 UTC I noticed the HWSY code had stopped running. There have been many computer and front-end restarts since I left it running, so it is unclear what caused it to stop. I restarted it and am going to leave it again for another day.