J. Freed,
PRM shows very little coupling, but a strong unknown noise at ~9.6 Hz was introduced.
Yesterday we did damping loop injections on all 6 BOSEMs on the PRM M1. This is a continuation of the work done previously for ITMX, ITMY, PR2, and PR3.
As with PR3, gains of 300 and 600 were collected (300 is labeled as low_noise).
The plots, code, and flagged frequencies are located at /ligo/home/joshua.freed/20241031/scrpts, while the diaggui files are at /ligo/home/joshua.freed/20241031/data. This time, the 600 gain data was also saved as a reference in the diaggui files (see below), saved as 20241107_H1SUSPRM_M1_OSEMNoise_T3_LN.xml.
Unknown Lockloss
This lockloss does not have the same PSL_FSS signal signature that the previous locklosses tonight have had.
Vickie and I are both thinking that this was a different type of lockloss than what we have seen earlier tonight.
Another lockloss
Looks like this lockloss may have also been an IMC/PSL-FSS issue.
Daniel, Tony, Vicky
edit: see PSL lockloss trends for the previous lockloss, 81157. Both this and the previous lockloss look PSL-related (i.e. upstream of the IMC, which does see power fluctuations earlier than suggested by the IMC_TRANS channel). Might be worth testing whether the IMC can stay locked with the ISS_ON vs. ISS_OFF tests again, now that the PSL laser is not obviously mode hopping.
This lockloss looks a bit strange: see 20-ms trends, 1-second trends, 5-second trends. It looks like a very fast lockloss (< 1 ms) coincident with various FSS/ISS/TPD glitches - is this different from what we saw before? Here, the AS port loses light within < 1 ms (as witnessed on Keita's new lockloss monitor, AS_A, and LSC-REFL, which sees a corresponding power increase). The lockloss is within 1 ms of changes in PSL-FSS TPD, FAST_MON, PC_MON and ISS AOM, SECONDLOOP, etc.
Weirdly, IMC-TRANS_IN1_DQ (fast 16k channel) does not see the power change until ~5 ms later, which I don't understand. It seems like DARM loses lock (which should be why the AS port and LSC-REFL change inversely, right?) while the power on IMC-TRANS doesn't change, despite the PSL FSS and ISS loops all seeing glitches.
Daniel suggested there could be some additional analog filtering on IMC-TRANS that slows this channel down (even though it is recorded at 16k?) - we're not sure why there is such a delay, or whether this IMC-TRANS channel is a reliable timing metric for what is happening. The ~5 ms is also much longer than the storage time implied by the IMC's ~8.8 kHz cavity pole (1/(2*pi*8.8 kHz) ~ 18 microseconds), so the cavity itself can't explain it.
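As a quick sanity check on that storage-time number (a minimal sketch; only the ~8.8 kHz cavity pole from above is used, the rest is arithmetic):

import numpy as np
f_pole = 8.8e3                    # IMC cavity pole [Hz]
tau = 1 / (2 * np.pi * f_pole)    # 1/e storage time of the cavity field
print(f"storage time ~ {tau*1e6:.0f} us")   # ~18 us, far shorter than the ~5 ms delay seen on IMC-TRANS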
Daniel helped add some fast PSL-ISS_SECONDLOOP channels to Sheila's scopes, and I've added Keita's lockloss monitor channel too (81080, H1:PEM-CS_ASC_5_19_2K_OUT_DQ), then saved this scope at /ligo/home/victoriaa.xu/ndscope/PSL/PSL_lockloss_search_fast_iss.yaml
TITLE: 11/08 Eve Shift: 0030-0600 UTC (1430-2200 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 1mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.43 μm/s
QUICK SUMMARY:
H1 has been locked for 1 hour.
All systems running well.
Lockloss potentially caused by PSL issues
The first channel to show motion is H1:PSL-FSS_FAST_MON_OUT_DQ.
Relocking now, currently at Engage Soft Loops.
Trends on this lockloss are coincident with PSL FSS / ISS / TPD glitches - so I think the PSL tag is appropriate here.
We added MC2_TRANS to the scope (top right subplot), and it shows power changes earlier than IMC_TRANS_{IN1_OUT}_DQ channels.
I think this means that IMC power changes do happen within 1 ms of FSS glitches starting, which wasn't clear from the IMC_TRANS channel we've been using (where the 16k IN1_DQ and 2k OUT_DQ channels both showed >1 ms delays in the IMC power changing).
FAMIS 26448 : Trend the BRSX & BRSY Drift
The minute trends of the driftmon for BRSX have 2 spikes, but both of those are from Jim adjusting the BRSX on Sep 19th and Sep 24th.
TITLE: 11/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Putting my alog in early since I'm leaving early. Currently observing at 160Mpc and have been locked for 50 minutes. Secondary useism is high-ish, but we just went back to CALM from USEISM. The relock attempt from when I came in this morning wasn't too bad, and relocking after the lockloss was also not bad, besides SRM tripping during SRC align in IA and not wanting to lock SRC.
LOG:
15:30 In PREP_FOR_LOCKING
- stuck?
15:42 DOWN and then started relocking
16:28 Lockloss from MAX_POWER
16:43 Lockloss from ACQUIRE_DRMI_1F
16:45 Started an initial alignment
17:04 Initial alignment done
17:55 NOMINAL_LOW_NOISE
17:59 Observing
20:17 Lockloss due to earthquake
20:25 Started an initial alignment
- SRM WD tripped during SRC aligning
- I raised the WD trip level for SRM M3 to 200 from 150 since we get this tripping quite a bit
- Tried SR2 align and AS centering again, didn't work
- Left IA
20:56 Left IA, relocking
21:46 NOMINAL_LOW_NOISE
22:05 Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:41 | SAF | Laser | LVEA | YES | LVEA is laser HAZARD | 08:21 |
17:33 | PEM | Robert | CER | n | Moving cable for grounding study | 17:39 |
17:37 | FAC | Kim | H2 | n | Tech clean | 17:54 |
20:38 | PEM | Robert | LVEA | Y | Setting up scattering measurement | 20:58 |
Lockloss @ 11/08 20:17 UTC after nearly 2.5 hours locked, due to a local earthquake.
22:05 Observing
[Tony, Erik]
CDS laptops have been locking up when the lid is closed and then reopened.
There is a fix, but it has to be applied by individual users.
From the "Applications" menu (upper left) go to "Settings" -> "Power Management" -> "System".
Screen shot attached.
With "On Battery" selected, change the Laptop Lid action action to "Suspend", then change the System sleep mode to "Suspend".
Select "Plugged In" and make the same changes.
The laptop will now go into suspend mode when the lid is shut. When you reopen it, the screen will be blank for a bit, but if you press the space bar, it will wake up after several seconds.
Also, you may now use your personal account when logging in to the laptop. The controls user can still be used. The settings described above are per-user. If you make the changes as controls, that won't affect your personal account.
If you use the controls user, you may want to check these settings each time you log in, since somebody else may have changed them.
Richard, Oli
We had a Check PSL Chiller verbal alarm almost an hour ago, and Richard heard it beeping in the diode room so he added 120mL of water to the PSL chiller.
Fri Nov 08 10:01:53 2024 INFO: Fill completed in 1min 52secs
Very short fill. Travis saw a vapor cloud to confirm the fill.
Camilla, Oli
Recently, because of the PSL/IMC issues, we've been having a lot of times where the IFO (according to verbals) seemingly goes into READY and then immediately goes to DOWN because the IMC is not locked. Camilla and I checked this out today and it turns out that these locklosses are actually from LOCKING_ARMS_GREEN - the checker in READY that is supposed to make sure the IMC is locked was actually written as nodes['IMC_LOCK'] == 'LOCKED' (line 947), which just checks that the requested state for IMC_LOCK is LOCKED, and it doesn't actually make sure the IMC is locked. So READY will return True, it will continue to LOCKING_ARMS_GREEN, and immediately lose lock because LOCKING_ARMS_GREEN actually makes sure the IMC is locked. This all happens so fast that verbals doesn't have time to announce LOCKING_ARMS_GREEN before we are taken to DOWN.
To (hopefully) solve this problem, we changed nodes['IMC_LOCK'] == 'LOCKED' to be nodes['IMC_LOCK'].arrived and nodes['IMC_LOCK'].done, and this should make sure that we stay in READY until the IMC is fully locked. ISC_LOCK has been reloaded with these changes.
The reason it has been doing this is that there is a return True in the main method of ISC_LOCK's READY state. When a state returns True in its main method, it skips the run method.
I've removed this return True and reloaded ISC_LOCK.
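Schematically, the fix looks like the sketch below. This is not the actual ISC_LOCK code; only the two nodes['IMC_LOCK'] expressions and the "return True skips run()" behavior are taken from the log, and the import/NodeManager lines are assumptions about the standard Guardian interface.

from guardian import GuardState, NodeManager   # assumed imports; ISC_LOCK defines its own node manager
nodes = NodeManager(['IMC_LOCK'])

class READY(GuardState):
    def main(self):
        # the old code ended with "return True" here; returning True from main()
        # skips run(), so the IMC check only ever ran once, and the old check
        # nodes['IMC_LOCK'] == 'LOCKED' was True as soon as LOCKED was merely
        # *requested* of the IMC_LOCK node
        pass

    def run(self):
        # new check: only let READY complete once the IMC_LOCK node has actually
        # arrived at and finished its requested state
        return nodes['IMC_LOCK'].arrived and nodes['IMC_LOCK'].done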
TITLE: 11/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 1mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.39 μm/s
QUICK SUMMARY:
Currently relocking; we had been sitting in PREP_FOR_LOCKING for 15 minutes, so I took us to DOWN and then restarted locking.
Last night's lockloss (11/08 11:15UTC)
Lockloss tool gave it the IMC tag, and I do see that ASC-AS_A dropped and the IMC lost lock less than 10 ms later (ndscope), but I don't see anything that could have caused this to happen.
17:59UTC Back to Observing
Summary:
The attachment shows a lockloss at around 11:31 PST today (tGPS~1415043099). It seems that the fast shutter, after it was shut, bounced down and momentarily unblocked the AS beam at around the time the power peaked.
For this specific lock loss, the energy deposited into HAM6 was about 17 J, and the energy that got past the fast shutter is estimated to be ~12 J because of the bouncing.
The bouncing motion has been known to exist for some time (e.g. alogs 79104 and 79397, the latter with an in-air slow-mo video showing the bouncing); it seems as if the self damping is not working. Could this be an electronics issue, a mechanical adjustment issue, or something else?
Also, if we ever open HAM6 again (before this fast shutter is decommissioned), it might be a good idea to make the shutter unit sit higher (shim?) so the beam is still blocked when the mirror reaches its lowest position while bouncing up and down.
Details:
The top panel shows the newly installed lockloss power monitor (blue) and ASC-AS_A_DC_NSUM which monitors the power downstream of the fast shutter (orange).
The shutter was triggered when the power was ~3 W at around t~0.36, and the ASC-AS_C level drops by a factor of ~1E4 immediately (the FS mirror spec is T<1000 ppm; it seems to be ~100 ppm in reality).
However, 50 ms later at t~0.41 or so, the shutter bounced back down and stayed open for about 15 ms. Unfortunately this roughly coincided with the time when the power coming into HAM6 reached its maximum of ~760 W.
Green is a rough projection of the power that went to the OMC (i.e. what AS_A_DC_NSUM would have looked like if it didn't rail). This was made by simply multiplying the power monitor by AS_A_DC_NSUM>0.1 (1 if true, 0 if false), ignoring the 2nd, 3rd, and 4th bounces.
All in all, for this specific lock loss, the energy coming into HAM6 was 16~17 J, and the energy that got past the FS was about 11~12 J because of the timing of the bounce vs. the power. The OMC seems to be protected by the PZT, though; see the 2nd attachment with a wider time range.
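For reference, a minimal sketch of the bookkeeping described above (the function, variable names, and the 0.1 threshold are stand-ins illustrating the method, not the actual analysis code):

import numpy as np

def ham6_energy(t, p_in, as_a, thresh=0.1):
    """Rough estimate of total energy into HAM6 and energy past the fast shutter.
    t    : time vector [s]
    p_in : calibrated power into HAM6 from the lockloss power monitor [W]
    as_a : AS_A_DC_NSUM time series (light seen downstream of the shutter)
    """
    dt = np.median(np.diff(t))                 # sample spacing
    open_mask = (as_a > thresh).astype(float)  # 1 while the shutter is effectively open
    e_total = np.sum(p_in) * dt                # total energy into HAM6 (~16-17 J here)
    e_past = np.sum(p_in * open_mask) * dt     # energy that got past the shutter (~11-12 J here)
    return e_total, e_past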
The time scale of the lock loss spike itself doesn't seem that different from the L1 lock loss in LLO alog 73514 where the power coming to HAM6 peaked tens of ms after AS_A/B/C power appreciably increased.
The OMC DCPDs might be OK since they didn't record a crazy high current (though I have to say IN1 should have been constantly railing once we started losing lock, which makes the assessment difficult), and since we've been running with a bouncy FS and the DCPDs have been good so far. Nevertheless we need to study this more.
Two lock losses, one from last night (1415099762, 2024-11-08 11:15:44 UTC, 03:15:44 PST) and another one that just happened (1415132263, 2024/11/08 20:17:25 UTC) look OK.
The shutter bounced ~50ms after the trigger but the power went down before that.
Two more lock losses from today (1415141124, 2024-11-08 22:45:06 UTC and 1415145139, 2024-11-08 23:52:00 UTC) look OK.
In these plots, shutter open/close is judged by p(monitor)/p(AS_A) < some_threshold (open if true).
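In code form that criterion is simply the following (variable names and the threshold value are placeholders, not the actual script):

import numpy as np

def shutter_is_open(p_monitor, p_as_a, threshold):
    # open if the downstream AS_A power is a sizeable fraction of the monitor power
    return (p_monitor / np.maximum(p_as_a, 1e-9)) < threshold   # small floor avoids divide-by-zero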
WP 12139
Entry for work done on 11/5/2024
Two 3T seismometers were installed in the LVEA Biergarten next to the PEM area. Signals are routed through the SUS-R3 PEM patch panel into the CER. Signals are connected to PEM AA chassis 4 and 5.
F. Clara, J. Warner
There are 2 of these plugged in; they are 3-axis seismometers, serial numbers T3611670 and T3611672. The first one is plugged into ports 4, 5 & 6 on the PEM patch panel, the second is plugged into ports 7, 8 & 9. In the CER, T3611670 is plugged into ports 21, 22 & 23 on PEM ADC5 and T3611672 is plugged into ports 27, 28 & 29 on PEM ADC4. In the DAQ, these channels are H1:PEM-CS_ADC_5_{20,21,22}_2K_OUT_DQ and H1:PEM-CS_ADC_4_{26,27,28}_2K_OUT_DQ. So far the seismometers look like they are giving pretty good data, similar to the STS and the old PEM Guralp in the biergarten. The seismometers are oriented so that the "north" marking on their carry handles points down the X-arm, as best as I could do by eye.
I need to figure out the calibrations, but it looks like there is almost exactly a -15 dB difference between these new sensors and the old PEM Guralp, though maybe the signal chain isn't exactly the same.
Attached images compare the 3T's to the ITMY STS and the existing PEM Guralp in the biergarten. The first image compares ASDs for each seismometer. The shapes are pretty similar below 40 Hz, but above that they all have very different responses. I don't know what the PEM Guralp is calibrated to, if anything; it looks ~10x lower than the STS (which is calibrated to nm/s). The 3T's are about 5x lower than the PEM sensor, so ~50x lower than the STS.
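The -15 dB figure and the factor-of-~5 statement above are consistent (a quick check; only those two numbers from above are used):

amplitude_ratio = 10 ** (-15 / 20)            # -15 dB as an amplitude ratio
print(amplitude_ratio, 1 / amplitude_ratio)   # ~0.18, i.e. the 3T output sits a factor of ~5.6 below the PEM Guralp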
The second image shows TFs for the X, Y & Z DOFs between the 3T's and the STS. These are just passive TFs between the STS and the 3T's to see if they have a similar response to ground motion. They are generally pretty flat between 0.1 and 10 Hz. The X & Y DOFs seem pretty consistent; the Z TFs differ starting around 10 Hz. I should go and check that the feet are locked and have similar extension.
The third image shows TFs between the 3T's and the existing PEM Guralp. Pretty similar to the TFs with the STS: the horizontal DOFs all look very similar, flat between 0.1 and 10 Hz, but the ADC4 sensor has a different vertical response.
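For reference, a minimal sketch of how a passive TF like these can be estimated from two simultaneous ground-motion records (scipy-based and illustrative only; not the code actually used for the plots):

from scipy import signal

def passive_tf(ref, meas, fs, nperseg=4096):
    """H1 estimator of the transfer function meas/ref from ambient ground motion."""
    f, pxy = signal.csd(ref, meas, fs=fs, nperseg=nperseg)   # cross spectral density
    f, pxx = signal.welch(ref, fs=fs, nperseg=nperseg)       # reference PSD
    return f, pxy / pxx                                      # complex TF estimate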
I'll look at noise floors next.
The noise for these seems almost comparable to T240s above 100 mHz; I'm less certain about the noise below 100 mHz, since these don't have thermal enclosures like the other ground sensors. Using mccs2 in MATLAB to remove all the coherent noise with the STS and PEM Guralp, the residual noise is pretty close to the T240 spec noise in SEI_sensor_noise. The attached plots are the ASDs and residuals after finding a scale factor that matches the 3T ASDs to the calibrated ITMY STS ASDs. Solid lines are the 3T ASDs, dashed lines are the residuals after coherent subtraction.
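A simplified, single-witness version of that coherent subtraction (the real analysis used mccs2 in MATLAB with multiple reference channels; this sketch just illustrates the idea that the residual PSD is the target PSD times (1 - coherence)):

import numpy as np
from scipy import signal

def asd_and_residual(target, witness, fs, nperseg=4096):
    """ASD of the target sensor and of the residual after optimal linear
    subtraction of one witness channel."""
    f, ptt = signal.welch(target, fs=fs, nperseg=nperseg)              # target PSD
    f, coh = signal.coherence(witness, target, fs=fs, nperseg=nperseg) # magnitude-squared coherence
    return f, np.sqrt(ptt), np.sqrt(ptt * (1.0 - coh))                 # target ASD, residual ASD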
For convenience I've attached the response of the T240 and the STS-2 from the manuals.
These instruments both have a steep fall-off above 50-60 Hz.
This is not compensated in the realtime filters, as it would just add lots of noise at high frequency, which we'd then have to roll off again.
T240 user guide - pg 45
https://dcc.ligo.org/LIGO-E1500379
The T240 response is pretty flat up to 10 Hz, has a peak at ~ 50 Hz, then falls off rapidly.
STS-2 manual - pg 7
https://dcc.ligo.org/LIGO-E2300142
Likewise, the STS-2 response is pretty flat up to 10 Hz, then there is ripple and a steep falloff above 60 Hz.