As part of the Virgo Alert System, I have created a new pycaspy IOC running on h1fescript0 which serves the EPICS channels that TJ's python script writes to.
Full details in the wiki
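For context, a minimal sketch of what a soft IOC of this kind can look like, assuming the pcaspy package is the library being referred to; the channel prefix and PV names below are hypothetical placeholders, not the actual alert channels (see the wiki for the real list).

from pcaspy import SimpleServer, Driver

PREFIX = 'H1:CDS-VIRGO_ALERT_'        # hypothetical channel prefix
PVDB = {
    'STATE':   {'type': 'int'},        # e.g. written by the external alert script
    'LATENCY': {'type': 'float', 'unit': 's'},
}

class AlertDriver(Driver):
    """Default driver: simply holds the PV values that clients caput/caget."""
    def __init__(self):
        super(AlertDriver, self).__init__()

if __name__ == '__main__':
    server = SimpleServer()
    server.createPV(PREFIX, PVDB)
    driver = AlertDriver()
    while True:
        server.process(0.1)   # service Channel Access requests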
[PaulM, JoeM, Jenne]
Most of the Newtonian noise sensors that had to be removed for the May vent are back. We did not do the ~3 that are north of the HAM4 north door; I'll take care of those quickly next Tuesday.
This morning I executed WP #7044. The supplies were placed on the floor directly behind the rack/chassis that they are powering.
Several oplev maintenance items were completed today.
Swap ETMx OpLev Laser
I swapped the ETMx oplev laser, as the old laser was glitching and there was no adjustment room left in the laser power to keep it in the glitch-free zone. The new laser SN is 130-1; the old laser SN was 106-1. The output power of laser SN 130-1 was set to the same point used in the Pcal lab (using the Current Mon port on the back of the laser): 0.793 V. I will test laser SN 106-1 in the Pcal lab in the LSB to see if it is still useful or if a return for refurbishment is necessary.
Re-center ETMx, ITMx, and SR3 OpLevs
After the May vent the alignment of these 3 optics changed slightly, requiring re-centering of their oplevs. This has now been completed.
Power Adjustment for ITMy and ETMy OpLevs
Both of these oplev lasers were showing signs of very slight mode-hop glitches, so I adjusted the output power of both lasers to return to a glitch-free operating power. I used the Current Mon port on the back of the lasers to monitor the power increase (the port outputs a voltage). The adjustments were:
All of the affected lasers (ETMx, ITMy, ETMy) will need a few hours to return to thermal equilibrium, so I will assess later this afternoon whether or not they are still glitching and require further adjustment. This completes WP 7042. Should further adjustments to these lasers be required, I will open a new work permit.
Jim, Krishna
We moved the c-BRS close to the ITMY chamber (roughly the same position relative to the chamber as the previous position relative to ITMX); see attached photos with Jim for scale. I unlocked the beam-balance, centered it, and hooked up the fiber optics and the electronics. After wondering why we weren't getting light on the photodiodes, I realized I had to actually power the laser for that. doh.
The instrument is functioning normally now, though the drift in the beam-balance position is high, as expected. It will settle slowly over the next few days. It is back under guardian control, which is also working well. To compensate for the drift, which is currently in the opposite direction from normal, I have changed the setpoints of the Piezo1 Offset and Piezo2 Offset to 90,000 and -20,000 (compared to the normal values of 110,000 and -40,000, respectively). Once the drifts normalize, these offsets can be returned to their normal values.
Per FRS 7691 and WP 7052 I looked at the weather station temperature sensor. There is an intermittent fault on the signal that bothers detChar, so I re-terminated the wiring in the hope that this fixes the problem. Will have to trend the data for some time.
todo:
- Sheila, Cheryl
Restarted all nuc computers in control room per Carlos's request.
15:09 UTC Karen to end Y (cleaning)
15:15 UTC Hugh and Chris to mechanical room
15:17 UTC Took ISC_LOCK to DOWN for Jason to start PSL FAMIS tasks
15:19 UTC Jim and Krishna to LVEA (compact BRS)
15:22 UTC Hugh HEPI CS
15:23 UTC Dick and John to LVEA
15:32 UTC Christina leaving end X
15:33 UTC Sudarshan to end X (PCAL)
15:35 UTC Filiberto to CER, slow controls chassis 6
15:39 UTC Kyle to LVEA to retrieve vacuum pump
15:40 UTC Jason done
15:46 UTC Carlos to CER to restart work station
15:49 UTC Hugh to end stations (HEPI)
15:51 UTC Pep, Ed and Paul to end Y (ESD power supply)
15:57 UTC Gerardo to end Y (pull cable)
15:58 UTC Karen leaving end Y
16:00 UTC Peter transitioning LVEA to laser safe
16:05 UTC Jason to end X to swap and recenter optical lever laser
16:10 UTC Richard to CS roof to re-terminate weather station temperature sensor
16:11 UTC Carlos back
16:11 UTC Karen to LVEA (cleaning)
16:11 UTC Peter done transitioning LVEA to laser safe. Peter to optics lab.
16:18 UTC Alex Urban restarting GDS calibration pipeline
16:19 UTC Alex done
16:27 UTC Rick to end X to work with Sudarshan
16:28 UTC Richard back
16:30 UTC Krishna back
16:37 UTC Krishna back to LVEA
16:40 UTC Dave making CALCS model change
16:44 UTC Hugh back from end stations, to LVEA to move roaming seismometer
16:47 UTC Keita taking SURF students on tour through LVEA
16:55 UTC Pep, Ed and Paul back
16:57 UTC Sheila and Cheryl to PSL enclosure to pull in cables for GigE camera
17:00 UTC Filiberto done
17:05 UTC Hugh back, Ed to end Y, LN2 delivery through gate
17:14 UTC Jenne, Joe, and Paul to beer garden to glue accelerometers on floor
17:17 UTC Kiwamu aligning PRM and SRM
17:21 UTC Paradise water delivery
17:29 UTC Jason back, to LVEA to recenter ITMX, SR3 and power adjust ITMY
17:32 UTC Filiberto to end Y to help Gerardo
17:35 UTC Kiwamu to LVEA to check cabling around ISC rack 3 (near HAM6)
17:39 UTC Hanford fire through gate
17:41 UTC Changed power to 30 W for Cheryl
17:42 UTC Ace toilet service through gate
17:43 UTC Changed power back to 2 W
17:46 UTC Sudarshan out of end X
17:55 UTC Chandra done checking AIP
18:01 UTC Ed back (was back earlier)
18:05 UTC Keita done with tour, Cheryl back
18:06 UTC Jason done in LVEA, going to end Y
18:11 UTC Chandra WP 7054
18:15 UTC Jenne, Joe and Paul back
Joe Hanson at LLO sent me a photo of their HEPI Fluid Reservoir interior, so I wanted to check ours and did so this morning under WP 7040. I was unable to capture an image, but my observation was that there was a small, light, dust-appearing patch (a few cm2) on the surface and a few globules (drop size) on the walls of the vessel. I think the surface patch could actually be dust on the surface, and the globules on the sides are dried-out fluid, as the reservoir level has gone through level cycles. This was at the Corner Station. I saw no evidence of either at EndY, and just a small surface patch on the fluid at EndX. What I saw was nothing like the image from LLO, which I'll attach in a moment. Have closed WP 7040.
The primary and redundant h(t) pipelines were restarted at GPS second 1182010698. This pipeline restart is not accompanied by any filter changes, but does pick up gstlal-calibration-1.1.7. For more information on this version of the code, please see its redmine page.
For more information about the filters currently being run, please see the following aLOG entries:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=36864
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=36842
The corrected filter file needed for LHO Work Permit Number 7047 had already been staged on the DMT on June 14. Thus, this restart of the calibration code picked up the corrected filter file, and this completes LHO Work Permit Number 7047.
Correction to this alog: the pipeline restart does not affect the version of gstlal-calibration running, since version 1.1.7 was already running at Hanford. However, it does pick up corrected filters; the file pointing to those filters had the same name, so no command-line change was needed. I apologize for the confusion.
I performed the weekly PSL FAMIS tasks this morning.
HPO Pump Diode Current Adjust
All pump diode currents were increased by 0.1 A; new and old currents are summarized in the table below. The first attachment shows a 15-day minute-trend of how the DBs have decayed since the last current adjustment, while the second is a screenshot of the main PSL Beckhoff screen for future reference.
Diode Box | Old Operating Current (A) | New Operating Current (A)
DB1 | 48.6 | 48.7
DB2 | 51.5 | 51.6
DB3 | 51.5 | 51.6
DB4 | 51.5 | 51.6
I looked at and optimized the DB operating temps as well. I changed the temps on all the diodes in DB1 from 28.0 °C to 29.0 °C; the operating temps of the other 3 DBs remained unchanged. The HPO is now outputting ~157.7 W. This completes FAMIS task 8427.
PSL Power Watchdog Reset
I reset both PSL power watchdogs at 15:35 UTC (8:35 PDT). This completes FAMIS task 3655.
TITLE: 06/20 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Cheryl (Patrick covering at beginning of shift)
SHIFT SUMMARY:
A bit of a rough shift, with a 0.44 Hz oscillation seen in ASC PIT signals and on oplevs (primarily ITMy).
So we had a few hours of CORRECTIVE MAINTENANCE. (No FRS has been filed.)
LOG:
LLO called about a GRB alert; we are in a one-hour stand-down for the alert. LHO did not receive notice via Verbal Alarms.
There has not been a GRB since June 18 11:25 UTC according to GraceDb, so it looks like LLO received a false alarm.
Tagging OpsInfo for a reminder: L1500117 says to confirm that this is not a maintenance test, but maybe it should also say to confirm that it is a real event. Please make sure to check GraceDb after receiving one of these alerts, to confirm that it is not a test and that it is a current event.
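As a rough illustration of that check (not a prescribed procedure), something like the following could be used to list recent external (GRB) triggers in GraceDb and compare their times against the alert time; this assumes the ligo-gracedb client package, and the 'External' query string is an assumption about the query syntax.

from ligo.gracedb.rest import GraceDb

client = GraceDb()
# List a handful of recent external (GRB) triggers; check that one of them
# matches the alert time and is not flagged as a test/maintenance entry.
for i, event in enumerate(client.events('External')):
    print(event['graceid'], event['gpstime'])
    if i >= 9:
        break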
In the past week we have had two timing alarms:
Monday 6/12 01:19 UTC (Sunday 6/11 18:19 PDT) | EY GPS
Saturday 6/17 23:15 - 23:24 UTC (16:15 - 16:24 PDT) | MSR Master GPS
The first was a short (one minute) alarm from EY GPS (CNS-II). I trended all the EPICS channels on this system and found that only the Dilution of Precision (DOP) and the Receiver mode channels showed any variation at this time (the number of satellites locked did not change). Does this mean it is a real or a bogus error?
The second was a longer alarm (9 minutes) with the MSR master's GPS receiver (I think the internal one). The only channels in alarm were "GPS locked" and the MSR comparator channel-01 (MFO GPS). This would suggest a real problem with the Master's GPS receiver?
Time to pass this over to the timing group for further investigation (Daniel, Zsuzsanna, Stefan?)
These error messages seem real, but are non-critical. The internal GPS is only used to set the GPS date/time.
Sheila, Pep

The attached plots show the comparison between H1:GDS-CALIB_STRAIN and the OpLev channel (H1:SUS-ETMY_L3_OPLEV_SUM_OUT_DQ) for different glitch events found by HVeto on this day: https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20170419/detchar/hveto/. We also show a comparison between these channels for times when the OpLev was not glitching.

To get these results, we have assumed that the only force on the mirror is the radiation pressure force of the OpLev laser:

F = 2*P/c = m*a,

where P is the power of the laser, c is the speed of light, m is the mass of the mirror (taken as 40 kg), and a is the acceleration.

To find the power of the laser, we use the relation between the counts detected by the QPD and the power (DCC T1600085):

P = Counts * (40 [V] / (2^16 - 1)) * (1 / Transimpedance [Ohm]) * (1 / Whitening Gain) * (1 / Responsivity [A/W]) = Counts * 7.6295e-09

for the H1:SUS-ETMY_L3_OPLEV_SUM_OUT_DQ channel. The transimpedance values can be found in DCC T1600085, and the whitening gain values (in dB) in DCC T1500556-v4; the dB values have to be converted to linear using 10^(x/20). The responsivity is 0.4 A/W and can be found here: http://www.hamamatsu.com/us/en/product/alpha/S/4106/S5981/index.html#1328449179787 (the laser is at 635 nm).

After calculating the acceleration, we calculated the ASD of this time series using gwpy, and then divided by the square of the frequency to get the displacement of the mirror. To compare these values to the ones in DARM, we divided by 4000 m to get the strain.
As Keita pointed out, the plots in the previous post were wrong due to a missing factor of 1/(2*pi)^2 when converting from acceleration to displacement. I attach the same plots with the correct factor, plus five new plots that use a window of 1 second around the time of the event instead of the 4-second window used before.
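For reference, a minimal sketch of this counts-to-equivalent-strain conversion, including the 1/(2*pi*f)^2 factor noted above. It assumes data access through gwpy; the GPS time is a placeholder, and the calibration constant is the one quoted in the previous entry.

from gwpy.timeseries import TimeSeries
import numpy as np

CHANNEL = 'H1:SUS-ETMY_L3_OPLEV_SUM_OUT_DQ'
COUNTS_TO_WATTS = 7.6295e-9   # 40 V / (2^16 - 1) / transimpedance / whitening gain / responsivity (T1600085)
MIRROR_MASS = 40.0            # kg
ARM_LENGTH = 4000.0           # m
C = 299792458.0               # m/s

t0 = 1176500000               # placeholder GPS time near a glitch
data = TimeSeries.get(CHANNEL, t0 - 2, t0 + 2)

# Radiation pressure force on the test mass: F = 2*P/c = m*a
power = data * COUNTS_TO_WATTS                 # W
accel = power * (2.0 / (C * MIRROR_MASS))      # m/s^2

# ASD of the acceleration; divide by (2*pi*f)^2 to get displacement, then by
# the arm length to express it as an equivalent strain for comparison to DARM.
asd = accel.asd(fftlength=1)
freqs = asd.frequencies.value[1:]              # skip the DC bin
strain_equiv = asd.value[1:] / ((2 * np.pi * freqs)**2 * ARM_LENGTH)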
VerbalAlarms has been modified to look for new events from these new channels. Operators may get alarms with "VIRGO" at the beginning of the notification.