I performed the weekly PSL FAMIS tasks this morning.
HPO Pump Diode Current Adjust
All pump diode currents were increased by 0.1 A; the new and old currents are summarized in the table below. The first attachment shows a 15-day minute-trend of how the DBs have decayed since the last current adjustment, while the second is a screenshot of the main PSL Beckhoff screen for future reference.
| Diode Box | Old Operating Current (A) | New Operating Current (A) |
| DB1 | 48.6 | 48.7 |
| DB2 | 51.5 | 51.6 |
| DB3 | 51.5 | 51.6 |
| DB4 | 51.5 | 51.6 |
I looked at and optimized the DB operating temps as well. I changed the temps on all the diodes in DB1 from 28.0 °C to 29.0 °C; the operating temps of the other 3 DBs remained unchanged. The HPO is now outputting ~157.7 W. This completes FAMIS task 8427.
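For future reference, here is a minimal sketch of how the decay in the first attachment could be pulled back up offline, assuming NDS2 access; the channel name and trend suffix are my assumptions (check the Beckhoff/MEDM screens for the real record) and the dates are illustrative.

```python
from gwpy.timeseries import TimeSeries

chan = 'H1:PSL-PWR_HPL_DC_OUT.mean,m-trend'      # assumed HPO output power minute trend
pwr = TimeSeries.fetch(chan, 'June 4 2017', 'June 19 2017',
                       host='nds.ligo-wa.caltech.edu')
print(f"HPO output changed by {pwr.value[-1] - pwr.value[0]:.2f} W over the 15 days")
```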
PSL Power Watchdog Reset
I reset both PSL power watchdogs at 15:35 UTC (8:35 PDT). This completes FAMIS task 3655.
TITLE: 06/20 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Cheryl (Patrick covering at beginning of shift)
SHIFT SUMMARY:
A bit of a rough shift, with a 0.44Hz oscillation seen in the ASC PIT signals and on the oplevs (primarily ITMy).
So we had a few hours of CORRECTIVE MAINTENANCE (no FRS has been filed).
LOG:
Decided to slowly take H1 up to Nominal Low Noise, while watching the ITMy oplev & ASC (watched 0.05 Hz "live" spectra, oplev BLRMS channels on dataviewer, & the ASC Control signals).
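As an offline stand-in for those "live" spectra, a minimal gwpy sketch, assuming NDS2 access; the oplev channel name and the times are my assumptions, for illustration only.

```python
from gwpy.timeseries import TimeSeries

chan = 'H1:SUS-ITMY_L3_OPLEV_PIT_OUT_DQ'        # assumed oplev pitch channel
data = TimeSeries.fetch(chan, 'June 20 2017 12:00', 'June 20 2017 12:10',
                        host='nds.ligo-wa.caltech.edu')
asd = data.asd(fftlength=100, overlap=50)        # 0.01 Hz resolution, resolves 0.44 Hz
idx = abs(asd.frequencies.value - 0.44).argmin()
print('ASD at 0.44 Hz:', asd.value[idx])
```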
Miraculously, H1 eventually made it to Nominal Low Noise & the culprits of the evening did not appear. So H1 was cautiously taken to OBSERVING at 12:50 UTC.
Unfortunately, at 14:00 the 0.44Hz noise returned (ITMy oplev spectra looked bad, ITMy oplev blrms increased, and the ASC control signals all showed the 0.44Hz feature). Took H1 out of OBSERVING, marked it as CORRECTIVE MAINTENANCE & tried to make adjustments to ASC CSOFT gain (went from 0.6 to 0.8 w/ a 30sec TRAMP), but H1 eventually dropped out of lock.
Attached is the ITMy Oplev Summary Page (unfortunately it doesn't show anything after 14:00 UTC yet).
As Jeff alluded to in his summary, Sheila posted an alog about oplev damping back in April, but I stopped looking at oplevs since the off-center ITMX & ETMX had been that way for over a week (i.e., they were not a new feature).
After looking into ASC Land, I went back to the oplevs, going through Sheila's alog and then also Krishna's alog. Summary of oplev operations over the last couple of months:
So with oplevs not actively controlling the optics, I didn't focus on them. However, we still use them for monitoring the position of the optics. And with that, I returned to checking them out after reading Krishna's alog about ITMy. I took spectra of all four test masses (references from last night's lock & from when we had trouble with H1 tonight). Of all the test masses, ITMy appears to be much noisier, and it all begins down at 0.44Hz (this is the same oscillation seen in the ASC signals previously alogged tonight).
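Here is a sketch of that four-test-mass comparison, assuming gwpy/NDS2 access; the channel names and GPS times are placeholders, not the ones actually used.

```python
from gwpy.timeseries import TimeSeriesDict

chans = [f'H1:SUS-{o}_L3_OPLEV_PIT_OUT_DQ' for o in ('ITMX', 'ITMY', 'ETMX', 'ETMY')]
ref_start, bad_start = 1181900000, 1181990000    # placeholder GPS times
ref = TimeSeriesDict.fetch(chans, ref_start, ref_start + 600, host='nds.ligo-wa.caltech.edu')
bad = TimeSeriesDict.fetch(chans, bad_start, bad_start + 600, host='nds.ligo-wa.caltech.edu')
for c in chans:
    excess = bad[c].asd(fftlength=50) / ref[c].asd(fftlength=50)
    i = abs(excess.frequencies.value - 0.44).argmin()
    print(c, 'excess at 0.44 Hz:', excess.value[i])   # expect ITMY to stand out
```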
Following Krishna's diagnosis, I looked at the BLRMS signals for the oplevs in the H1 Summary Pages, & one can clearly see ITMy starting to look unhealthy compared to the other optics toward the end of Jeff's shift, starting between 4:00-5:00 UTC (look at the bands between 0.3 and 10Hz). The behavior isn't like what Krishna saw (back then ITMy reached values of 1.0+ urad for short times, whereas in this case ITMy is just under 0.1 urad, but stays there for longer stretches).
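A rough recreation of the summary-page BLRMS check, using the same (assumed) ITMY oplev pitch channel as above; the start time is a placeholder.

```python
from gwpy.timeseries import TimeSeries

t0 = 1181990000                                  # placeholder GPS start time
data = TimeSeries.fetch('H1:SUS-ITMY_L3_OPLEV_PIT_OUT_DQ', t0, t0 + 3600,
                        host='nds.ligo-wa.caltech.edu')
blrms = data.bandpass(0.3, 10).rms(stride=60)    # 0.3-10 Hz band, 1-minute RMS
print('peak BLRMS over the hour:', blrms.max())  # compare against the ~0.1 urad level
```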
Since we don't use oplevs for damping, could the issue be noise from the light of the ITMy oplev being seen by H1? Or is there an ASC effect which is also being observed with the ITMy oplev?
H1 locked back up to 70Mpc, but within 5 min the ASC Pit signals started to ring up at 0.44Hz. This time H1 couldn't hobble along with the ASC noise & quickly dropped out of lock on its own.
Marking as CORRECTIVE MAINTENANCE again until this can be resolved.
H1's issue can clearly be seen in the attached snapshot of the ASC striptools.
Looking at DARM shows noisy lines at 0.44Hz (which I presume is what is seen in the ASC Pit signals on the StripTools), 0.89Hz, & 1.34Hz in addition to the broadband noise. Spectra attached (where the quiet reference is from 2:00 UTC (7pm PDT)).
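The quoted lines sit near integer multiples of roughly 0.44-0.45 Hz, so one natural check is whether they are harmonics of the 0.44Hz feature. A self-contained sketch of that check on synthetic data (broadband noise plus the line and two harmonics):

```python
import numpy as np
from scipy.signal import welch, find_peaks

fs, dur = 512, 600
t = np.arange(fs * dur) / fs
darm = np.random.randn(t.size)                      # stand-in broadband noise
for n in (1, 2, 3):
    darm += 5 * np.sin(2 * np.pi * n * 0.44 * t)    # 0.44 Hz line + harmonics

f, psd = welch(darm, fs=fs, nperseg=fs * 100)       # ~0.01 Hz resolution
band = (f > 0.2) & (f < 2.0)
peaks, _ = find_peaks(psd[band], prominence=100 * np.median(psd[band]))
print(f[band][peaks])                               # ~0.44, 0.88, 1.32 Hz
print(np.round(f[band][peaks] / 0.44))              # integer ratios -> harmonics
```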
Checking SDF for ASC & Related To Pitch
Looked in SDF & only found a few changes; most of those related to ASC & pitch were fairly small.
Using TimeMachine To Check For Diffs
Since the striptools were showing issues with the CSOFT PIT & MICH_P signals, I went about following the signal through ASC Land to see where the issues arise. According to the ASC Input Matrix, the PDs feeding CSOFT are the TR A & B PDs, & one can clearly see the signal grow for these PDs. And then from CSOFT the signal goes out to all four test masses.
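A toy numerical illustration of that signal path; the matrix values and signs here are made up for illustration and are not the numbers in the front end.

```python
import numpy as np

tr_a_pit, tr_b_pit = 0.8, 0.7                    # example QPD pitch readings (arbitrary units)
in_matrix = np.array([0.5, 0.5])                 # assumed: CSOFT_P = average of TR A & B
csoft_p_err = in_matrix @ np.array([tr_a_pit, tr_b_pit])

gain = 0.6                                       # the CSOFT_P gain in use tonight
csoft_p_ctrl = gain * csoft_p_err                # loop filters ignored for brevity

out_matrix = np.array([1.0, 1.0, 1.0, 1.0])      # assumed signs for the four test masses
drives = out_matrix * csoft_p_ctrl
print(dict(zip(['ITMX', 'ITMY', 'ETMX', 'ETMY'], drives)))
```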
Out Of OBSERVING & Marking As Corrective Maintenance
Since we were not getting anywhere (H1 range was barely above 50Mpc), I decided to take H1 Out of OBSERVING & tag Observatory Mode as CORRECTIVE MAINTENANCE. The thought is that starting anew will be useful here.
I tried lowering the CSOFT Pit gain. First from 0.6 -> 0.5, and then from 0.5 -> 0.2 (this immediately rung up ASC more and broke lock).
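For reference, the kind of gain change described above could be scripted with pyepics along these lines; the channel names are my best guess at the ASC filter-module records and should be checked against the MEDM screen before use.

```python
from epics import caput

caput('H1:ASC-CSOFT_P_TRAMP', 30)   # assumed ramp-time record, in seconds
caput('H1:ASC-CSOFT_P_GAIN', 0.5)   # step the gain from 0.6 down to 0.5
```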
TITLE: 06/20 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PDT), all times posted in UTC
STATE of H1: Observing at 52Mpc
OUTGOING OPERATOR: Jeff
CURRENT ENVIRONMENT:
Wind: 6mph Gusts, 5mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
ASC Pit Control & Error signals are huge, & this is resulting in broadband noise on DARM from 10-100Hz & obviously resulting in a low range. I will start investigating further while we are still locked. The ITMx & ETMx oplevs have been at/beyond their limits in pit & yaw, but they've been like that for at least a week.
Shift Summary: After the IFO relocked, ran the A2L check script. Pitch is a bit rung up; since we are in commissioning mode, decided to run the repair script to make things better.
LLO called about a GRB alert. LHO did not receive notice from Verbal Alarms. In a one-hour hold.
Around 06:00 (23:00) the ASC Pitch Control and Error signals started to ring up. Could find no apparent reason. Sheila’s aLOG #35371 talks about this and OpLev damping. ITMX and ETMX OpLevs are off center. However, OpLev damping appears to be off on ITMY and ETMY. Range is suffering.
LLO called about a GRB alert. In a one-hour stand-down for the alert. LHO did not receive notice via Verbal Alarms.
There has not been a GRB since June 18 11:25 UTC according to GraceDb, so it looks like LLO received a false alarm.
Tagging OpsInfo for a reminder: L1500117 says to confirm that this is not a maintenance test, but maybe it should also say to confirm that it is a real event. Please make sure to check GraceDb after receiving one of these alerts, to make sure it is not a test and that it is a current event.
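That check could be scripted with the GraceDB client along these lines; the query string and dates are illustrative and should be checked against the GraceDB query help, and valid LIGO.ORG credentials are required.

```python
from ligo.gracedb.rest import GraceDb

client = GraceDb()
query = 'External created: 2017-06-18 .. 2017-06-20'   # illustrative query
for ev in client.events(query):
    print(ev['graceid'], ev['created'], ev.get('search'))
```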
Back to Observing for about 3 hours. Environmental and seismic conditions are good. Range has been in the upper 60Mpcs. All monitors are green and clear.
Today I spent some more time moving POP spot positions as well as soft again. In the end we have gone back to where we were over the weekend.
At John W.'s request, I took another measurement now that several days have passed since pumping was stopped. Pumping stopped last week at 14 microns -> today's "post-pumping" reading was 29 microns (warm day, 92°F).
TITLE: 06/19 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Jeff
SHIFT SUMMARY: locked and then commissioning
LOG:
In the past week we have had two timing alarms:
| Monday 6/12 01:19 UTC (Sunday 6/11 18:19 PDT) | EY GPS |
| Saturday 6/17 23:15 - 23:24 UTC (16:15 - 16:24 PDT) | MSR Master GPS |
The first was a short (one-minute) alarm from the EY GPS (CNS-II). I trended all the EPICS channels on this system and found that only the Dilution of Precision (DOP) and Receiver Mode channels showed any variation at this time (the number of satellites locked did not change). Does this mean it is a real or a bogus error?
The second was a longer alarm (9 minutes) with the MSR master's GPS receiver (I think the internal one). The only channels in alarm were "GPS locked" and the MSR comparator channel-01 (MFO GPS). This would suggest a real problem with the Master's GPS receiver?
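For reference, the kind of trending described for the first alarm could be done offline with gwpy along these lines; the channel names below are placeholders, not the real timing records.

```python
from gwpy.timeseries import TimeSeriesDict

chans = ['H1:SYS-TIMING_Y_GPS_DOP',            # placeholder: dilution of precision
         'H1:SYS-TIMING_Y_GPS_RECEIVER_MODE',  # placeholder: receiver mode
         'H1:SYS-TIMING_Y_GPS_NSAT']           # placeholder: satellites locked
data = TimeSeriesDict.fetch(chans, 'June 12 2017 01:00', 'June 12 2017 01:40',
                            host='nds.ligo-wa.caltech.edu')
for name, ts in data.items():
    print(name, 'min/max over the window:', ts.min(), ts.max())
```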
Time to pass this over to the timing group for further investigation (Daniel, Zsuzsanna, Stefan?)
These error messages seem real, but are non-critical. The internal GPS is only used to set the GPS date/time.
Refer back to LHO aLog 36739 to walk back through these reports. I'll attach the updated map to a comment shortly.
NOTICE--There is a location that may be quieter than ITMY STS2!!
Roam6 is located just west of BSC4's NE SEI pier, so it is certainly now farther from the south wall than ITMY STS2-B, as well as any other previous Roam position. The attached plots show my typical comparison of a quiet time (18 June 1200 UTC) to a windy time (16 June 1800 UTC). These times are free of EQ activity.
The first plot is just the corner station wind. The windy time direction is a very steady 90 degrees or right down the Yarm to the CS.
The big news is seen on the X dof plot, where the HAM5 roaming seismo is quieter than the ITMY STS2 during the windy period from about 90mHz down, by maybe a factor of 2. Yes, the useism band is generally higher during the windy period.
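A numeric version of that X-dof comparison, assuming NDS2 access; the seismometer channel names are assumptions, and the windy hour follows the time quoted above.

```python
from gwpy.timeseries import TimeSeries

kw = dict(host='nds.ligo-wa.caltech.edu')
windy = ('June 16 2017 18:00', 'June 16 2017 19:00')
roam = TimeSeries.fetch('H1:PEM-CS_SEIS_ROAM_X_DQ', *windy, **kw)   # assumed channel
sts = TimeSeries.fetch('H1:ISI-GND_STS_ITMY_X_DQ', *windy, **kw)    # assumed channel
ratio = roam.asd(fftlength=200) / sts.asd(fftlength=200)            # 5 mHz resolution
f = ratio.frequencies.value
band = (f > 0.01) & (f < 0.09)
print('mean ratio in 10-90 mHz:', ratio.value[band].mean())         # ~0.5 would match "factor of 2"
```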
Bottom line, keep looking while we have a machine functioning with which to look.
Here is the updated locations map.
An EL3104 analog input Beckhoff terminal was replaced in EtherCAT End Station Chassis 3. Alog 36571 reported CH3 & CH4 on terminal 7 showing an offset. Used a voltage calibrator to verify the offset was from the EL3104 terminal and not upstream.
Updated FRS Ticket 6024. Once a few days have passed, we'll review the data to confirm success and that the problem is fixed.
Trend over the past 8 days. Nominal adjusted to 21.5 dBm.
J. Kissel, FRS Ticket 6024

I attach two trends of these local oscillator readbacks of the H1 ALS Y WFS A system: one over the past 12 days (three more than Daniel's plot above) and one over the past 365 days. One can see from the 365-day trend that these ADCs drift at a rate of (the equivalent of) ~0.1 dBm/month, so we're not going to be able to tell whether the problem is truly fixed in these 6 days since the module was swapped. So my vote is that we close the ticket, but continue to monitor, and re-open if we find any drift in the future.
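A rough version of that drift-rate estimate, assuming NDS2 access to a minute trend; the readback channel name is an assumption, for illustration only.

```python
import numpy as np
from gwpy.timeseries import TimeSeries

chan = 'H1:ALS-Y_WFS_A_DC_LO_POWER.mean,m-trend'    # assumed readback channel name
trend = TimeSeries.fetch(chan, 'June 20 2016', 'June 20 2017',
                         host='nds.ligo-wa.caltech.edu')
t_months = (trend.times.value - trend.times.value[0]) / (30 * 86400)
slope, offset = np.polyfit(t_months, trend.value, 1)
print(f"drift rate ~ {slope:.3f} dBm/month")        # compare with the ~0.1 dBm/month figure
```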