FAMIS 7450
Laser Status:
SysStat is good
Front End Power is 33.9 W (should be around 30 W)
HPO Output Power is 154.8 W
Front End Watch is GREEN
HPO Watch is GREEN
PMC:
It has been locked 20 days, 23 hr 9 minutes (should be days/weeks)
Reflected power = 17.42 W
Transmitted power = 57.23 W
PowerSum = 74.65 W
FSS:
It has been locked for 0 days 2 hr and 35 min (should be days/weeks)
TPD[V] = 1.875 V (min 0.9 V)
ISS:
The diffracted power is around 2.9% (should be 3-5%)
Last saturation event was 0 days 0 hours and 55 minutes ago (should be days/weeks)
Possible Issues:
PMC reflected power is high
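As a quick, unofficial sanity check on that flag, the reflected fraction implied by the readbacks above works out to roughly a quarter of the power sum (a sketch using only the numbers quoted here, not an official PMC visibility threshold):

# Quick sanity check using only the readbacks quoted above; the ~23% figure
# is illustrative, not an official threshold for the PMC.
refl, trans = 17.42, 57.23        # W, PMC reflected and transmitted power
total = refl + trans              # 74.65 W, matches the reported PowerSum
print(f"reflected fraction: {refl / total:.1%}")   # -> ~23.3%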
Attached are trends of the HPO laser head flow rates for the last 4 days. Once again, no change since the last report; flows are holding steady.
This morning I completed the weekly PSL FAMIS tasks.
HPO Pump Diode Current Adjustment (FAMIS 8434)
With the ISS OFF, I adjusted the HPO pump diode operating currents. All currents were increased by 0.1 A; I have attached a screenshot of the PSL Beckhoff main screen for future reference. The changes are summarized in the table below:
Diode Box | Old Operating Current (A) | New Operating Current (A)
DB1 | 49.3 | 49.4
DB2 | 52.3 | 52.4
DB3 | 52.3 | 52.4
DB4 | 52.3 | 52.4
I did not adjust the operating temperatures of the diode boxes. The HPO is now outputting ~154.9 W; the ISS is now turned ON. This completes FAMIS 8434.
PSL Power Watchdog Reset (FAMIS 3662)
I reset both PSL power watchdogs at 15:52 UTC (8:52 PDT). This completes FAMIS 3662.
TITLE: 08/08 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
Wind: 7mph Gusts, 6mph 5min avg
Primary useism: 0.47 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: Maintenance has already started because of the EQ.
TITLE: 08/08 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Jeff (covering for Travis)
SHIFT SUMMARY: Remained in observing until a 6.5 mag earthquake in China knocked us out of lock. Left ISC_LOCK at DOWN and started maintenance early. ISI_CONFIG is set to LARGE_EQ_NOBRSXY.
Issues (see earlier alogs):
GraceDB query failure
Possible mismatch on ops overview for ETMY RMS WD; current overview on video2 not the same as the one currently linked on sitemap?
ALS X fiber polarization
Seismon not updating
Tripped watchdogs from earthquake
LOG:
12:32 UTC restarted video4
14:05 UTC Lockloss, 6.5 mag earthquake in China
14:17 UTC Chris to CER to work on scaffolding
14:32 UTC Karen opening receiving door
14:46 UTC Bubba to CER to work with Chris on scaffolding
~15:02 UTC Karen driving car to OSB receiving
M 6.5 - 36km W of Yongle, China, 2017-08-08 13:19:49 UTC, 33.217°N 103.843°E, 10.0 km depth
Lockloss at 14:05 UTC.
It seems the System Time for Seismon stopped updating at 1186191618 GPS (Aug 08 2017 01:40:00 UTC), yet its System Uptime and Keep Alive are still updating.
14:10 UTC ISI platforms started tripping. Transitioned ISI config to LARGE_EQ_NOBRSXY.
Tripped: ISI ITMY stage 2, ISI ETMY stage 2, ISI ITMY stage 1, ISI ITMX stage 1, ISI ITMX stage 2, ISI ETMY stage 1, SUS TMSY.
Leaving ISC_LOCK in DOWN. Starting maintenance early.
14:18 UTC Set observing mode to preventive maintenance.
I don't quite know what happened with seismon. I updated the USGS client yesterday and restarted that code, but I didn't make any changes to the seismon code. I've restarted the seismon_run_info code that we are using and that seems to have fixed it. Maybe the seismon code crashed when I added geopy yesterday?
Started appearing intermittently (see attached).
I am assuming I should not try to adjust the fiber polarization at this point (triple coincidence observing).
Have remained in observing. No issues beyond the GraceDB query failure to report.
TITLE: 08/08 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 53Mpc
OUTGOING OPERATOR: Ed
CURRENT ENVIRONMENT:
Wind: 5mph Gusts, 4mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY: GraceDB query failure (see previous alog). Talked to LLO and asked them to alert us when they receive notifications.
The Ops overview MEDM screen is showing a red block that reads 'GraceDB query failure' (see attached). I found the following wiki page: https://cdswiki.ligo-wa.caltech.edu/wiki/ExternalAlertNotification and followed the instructions to log in to h1fescript0 and run 'ps aux | grep exttrig'. It reports the following:
root 787 0.0 0.0 77568 3636 ? Ss 23:41 0:00 sshd: exttrig [priv]
exttrig 963 0.0 0.0 77568 1612 ? S 23:41 0:00 sshd: exttrig@pts/5
exttrig 964 0.9 0.0 26860 8112 pts/5 Ss 23:41 0:00 -bash
exttrig 1217 0.0 0.0 18148 1260 pts/5 R+ 23:41 0:00 ps aux
exttrig 1218 0.0 0.0 9380 940 pts/5 R+ 23:41 0:00 grep --color=auto exttrig
exttrig 2386 0.0 0.0 28320 1640 ? S 2016 348:20 caRepeater
exttrig 5449 0.0 0.0 26992 1464 ? Ss Jun20 0:00 SCREEN
exttrig 5450 0.0 0.0 27416 8820 pts/8 Ss Jun20 0:00 /bin/bash
exttrig 6107 0.0 0.0 129772 9276 pts/8 Sl+ Jun20 33:49 python ./viralert_epics_ioc.py
This does not match what the wiki page says should be shown, but the wiki page was last updated in 2015, so maybe it is just not up to date. I suspect this is the reason that there was no verbal alert of the earlier GRB (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=38051).
As an aside, the screen that I brought up locally shows a red block near end Y that reads 'RMS WD'. This is not present on the one currently on the wall monitor. Looking at the ETMY suspension MEDM screen, to my eye I do not see anything out of place (see attached).
The wiki page does not give instructions for manually starting the code.
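For context, the 'GraceDB query' the code performs is presumably along the lines of the following (a minimal sketch using the ligo-gracedb Python client; the query string and server defaults are assumptions, not taken from the exttrig/viralert code itself):

# Illustrative only: roughly the kind of external-trigger query an alert
# client makes against GraceDB; not the actual exttrig implementation.
from ligo.gracedb.rest import GraceDb

client = GraceDb()                       # default GraceDB server and credentials (assumed)
for event in client.events("External"):  # external triggers (e.g. GRBs); query string assumed
    print(event["graceid"], event["gpstime"])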
TITLE: 08/08 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 52Mpc
INCOMING OPERATOR: Patrick
SHIFT SUMMARY:
LOG:
Backscattering at ISCT1 has been a problem in the past ( https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=35538) and we have installed new beam dumps (https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=35636). To answer the question of whether backscattering from ISCT1 will be a problem at higher sensitivities (assuming the current beam dumping), we placed a rented large-amplitude shaker on the leg cross beams.
An estimate from the data shown in the plot suggests that, assuming linearity, the background motion of the table would produce noise at the level of 1.1e-19 m/sqrt(Hz) at 11.4 Hz (a coupling factor of 1.25e-11 m test mass motion per meter of table motion).
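For reference, the projection above is just a linear scaling; the sketch below backs out the implied background table motion from the two quoted numbers (that value is inferred, not read directly off the measured spectrum):

# Linear scattering-noise projection (sketch; the table motion below is
# inferred from the two quoted numbers rather than measured directly).
coupling = 1.25e-11                  # m of test mass motion per m of table motion
darm_noise = 1.1e-19                 # m/rtHz projected at 11.4 Hz
table_motion = darm_noise / coupling
print(f"implied background table motion at 11.4 Hz: {table_motion:.1e} m/rtHz")  # ~8.8e-9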
We double-checked that the shaking-related features in the figure were produced by scattering from the POP path by closing the beam diverter; the features went away when the diverter was closed.
Philippe Nguyen, Sheila Dwyer, Robert Schofield
T Vo, Sheila
We measured a transfer function from PCAL X to DARM this afternoon. The attached screenshot shows a comparison of driving the ETMX ESD in the low noise state (with the settings that allowed us to transition to it from ETMY) and a drive to PCALX_DARM with the filters that Thomas copied from the ESD filter banks. The red traces (refs 14 and 15) were taken with a gain of -150 in the PCAL filter bank.
Based on this measurement, Thomas and I estimated this morning that we would need a gain of about -3000 in the PCAL X DARM bank to transition to PCAL X from the ESD, but that this would result in about 50 V rms on the OFS PD (there is about -82 dB of gain between the OFS PD, in volts, and the output of the PCALX DARM bank). This means that we would not have enough range on PCAL to simply replace the ESD with PCAL, so we have not tried the transition or locking ALS on ETMY.
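A back-of-envelope version of that estimate, using only the numbers quoted above (the drive rms at the bank output is derived here, not a measured value):

# Back-of-envelope check of the PCAL range estimate using the quoted numbers;
# the drive rms at the bank output is inferred, not measured.
gain_db = -82.0                       # OFS PD volts per count at the PCALX DARM bank output
gain_lin = 10 ** (gain_db / 20.0)     # ~7.9e-5 V per count
ofs_pd_rms_V = 50.0                   # quoted estimate for a bank gain of about -3000
drive_rms_counts = ofs_pd_rms_V / gain_lin
print(f"{gain_lin:.2e} V/count -> implied drive rms ~ {drive_rms_counts:.1e} counts")  # ~6e5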
If we aren't able to disconnect the ESD cables while in lock, we could look at using one of the ITM ESDs to transition from high voltage to low voltage on one ESD with the other disconnected. We also talked with Daniel, Richard and Fil, who had some ideas about different ways to try disconnecting the cables while in lock. We were about to try these when the EQ hit us.
Apologies: the intention bit was taken off and then on again. There was a question about a Pcal line change that Sudarshan had requested of me before returning to Observing, but Sheila had already turned all the lines off anyway.
02:40 UTC Also, I just realized that the lights in the LVEA have remained on for the last hour. Robert is going in to turn them off now.
01:37 We didn't get the verbal alarm and Guardian didn't go into its normal INJ_TRANS routine. This was done manually and a 1 hour stand-down is being observed. There were no changes being made to the instrument at the time of LLO's verbal notification. There were some folks in the LVEA. H1 Intention Bit was still in Commissioning.
02:37 GRB 1 hr stand-down has expired.
WP 6577
Dave B., Carlos P., Bubba G., John W., Patrick T.
I have migrated a subset of the EPICS channels provided by the FMCS IOC on h0epics to an IOC I created on fmcs-epics-cds. The IOC on fmcs-epics-cds connects to the BACNet server that Apollo has installed as part of the FMCS upgrade. The channels that I migrated have been taken over by this upgrade and can no longer be read out by the server that the IOC on h0epics reads from. The fmcs-epics-cds computer connects to the slow controls network (10.105.0.1) on eth0 and the BACNet network (10.2.0.1) on eth1. It is running Debian 8.
The IOC on h0epics is started from the target directory /ligo/lho/h0/target/h0fmcs (https://cdswiki.ligo-wa.caltech.edu/wiki/h0fmcs). I commented out the appropriate channels from the fmcs.db and chiller.db files in the db directory of this path and restarted this IOC. I made no changes to the files in svn.
The IOC on fmcs-epics-cds uses code from SNS: http://ics-web.sns.ornl.gov/webb/BACnet/ and resides in /home/cdsadmin/BACnet_R0-8. This is a local directory on fmcs-epics-cds. This IOC is started as cdsadmin:
> ssh cdsadmin@10.105.0.112
cdsadmin@fmcs-epics-cds: screen
Hit Enter
cdsadmin@fmcs-epics-cds: cd /home/cdsadmin/BACnet_R0-8/iocBoot/e2b-ioc/
cdsadmin@fmcs-epics-cds: ../../bin/linux-x86_64/epics2bacnet st.cmd
Hit CTRL-a then 'd'
Issues:
I came to realize during this migration that the logic behind the binary input channels is different in BACNet. In BACNet a value of 0 corresponds to 'inactive' and a value of 1 corresponds to 'active'. In the server being migrated from, a value of 0 corresponds to 'invalid'. This was verified for the reverse osmosis alarm: H0:FMC-CS_WS_RO_ALARM. In the BACNet server it reads as 0 or 'inactive' when not in alarm. When John W. forced it into alarm it read as 1 or 'active'. I believe Dave has updated his cell phone alarm notifier to match this.
A similar situation exists for the state of the chiller pumps. In the server being migrated from, a value of 1 appears to correspond to 'OFF' and a value of 2 appears to correspond to 'ON'. It has not been verified, but I believe in the BACNet server a value of 0 corresponds to 'OFF' and a value of 1 corresponds to 'ON'. The pump status for each building is calculated by looking at the state of the pumps. The calculation in the database for the IOC being migrated from appears to be such that as long as one pump is running the status is 'OPERATIONAL'. If no pump is running the status is 'FAILED'. I need to double check this with John or Bubba. I updated the corresponding calc records in the database for the BACNet IOC to match this (see the sketch after this entry).
In the server being migrated from, channels that are read by BACNet as binary inputs and binary outputs are read as analog inputs. I changed these in the database for the BACNet IOC to binary inputs and set the ONAM to 'active' and the ZNAM to 'inactive'.
The alarm levels also need to be updated. They are currently set through the autoburt snapshot files, which contain a separate channel for each alarm field. The autoburt request file has to be updated for the binary channels to have channels for .ZSV and .OSV instead of .HSV, .LSV, etc. So currently there is no control room alarm set for the binary channels, including the reverse osmosis alarm. I also need to update the medm screens to take account of this change.
Also, there is an invalid alarm on the control room alarm station computer for the mid X air handler reheat temperature.
Looking on the BACNet FMCS server this channel actually does appear to be genuinely invalid. It should be noted that this BACNet IOC is a temporary install until an OPC server is installed on the BACNet server. I would like to leave the permit for this work open until the FMCS upgrade is complete and all the channels have been migrated to the BACNet IOC.
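To make the pump-status convention above concrete, here is a minimal sketch of the OR-style calculation (the channel names are placeholders, and the production logic lives in calc records in the BACnet IOC database, not in Python):

# Sketch of the intended pump status logic; placeholder channel names.
# The real logic is implemented as calc records in the BACnet IOC database.
from epics import caget   # pyepics; assumes a machine on the slow controls network

pump_pvs = ["H0:FMC-CS_CY_PUMP1_ON", "H0:FMC-CS_CY_PUMP2_ON"]   # hypothetical names
pump_states = [caget(pv) for pv in pump_pvs]   # BACnet convention: 0 = OFF, 1 = ON
status = "OPERATIONAL" if any(pump_states) else "FAILED"
print(status)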
I updated the autoBurt.req file.
I have set the alarm on the RO channel: caput H0:FMC-CS_WS_RO_ALARM.OSV MAJOR
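To confirm the field took, one option (a sketch; assumes pyepics and channel access to the slow controls network) is to read the severity and state names back:

# Read back the alarm severity and state names set above (pyepics assumed).
from epics import caget

pv = "H0:FMC-CS_WS_RO_ALARM"
print(caget(pv))                                   # 0 = 'inactive' (not in alarm), 1 = 'active'
print(caget(pv + ".OSV", as_string=True))          # should read back MAJOR after the caput above
print(caget(pv + ".ZNAM"), caget(pv + ".ONAM"))    # 'inactive' / 'active'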
I have updated the medm screens.
Note: The 'ARCH = linux-x86' line in the Makefile under 'BACnet_R0-8/iocBoot/e2b-ioc' had to be changed to 'ARCH = linux-x86_64' in the code copied from SNS.