H1 ISC (SUS)
camilla.compton@LIGO.ORG - posted 10:32, Thursday 30 November 2023 (74494)
Checking SUS OSEMs before and after start of DARM Range Fuzzy period 2023/11/23 7:00UTC

As suggested by Richard/Robert to look for electronics issues, I trended the SUS OSEMs before and after the start of the DARM Range Fuzzy period at 2023/11/23 7:00UTC, between 6:00UTC and 8:00UTC, and zoomed in on the s-trends. Previous troubleshooting in 74455, 74377, 74383.

Trended ZMs, IMs, RMs, OMC, OFI, OM1,2,3; ETMs, ITMs @ M0, L2, L3; BS @ M1, L2, L3; MC1,2,3, SMs, PRs, FCs @ M1, M2, M3

I saw nothing change at the 2023/11/23 7:00UTC time. The only thing of note is that OFI, OM2, and ZM4,5,6 have louder periods every ~15 seconds; plots of the ZMs and OM2 are attached. This is not a new feature.

Images attached to this report
LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 10:29, Thursday 30 November 2023 (74497)
Gauge for Ion Pump X2-8 Not Reporting

The pressure gauge for X2-8 stopped reporting pressure at about 8:00 PM last night. This is a known issue caused by the lack of sunlight needed to charge the batteries via solar panels. No action is needed at this time.

Images attached to this report
LHO General
eric.otterman@LIGO.ORG - posted 10:17, Thursday 30 November 2023 (74496)
Mitsubishi unit low ambient hood calibration
Several of the low ambient hoods located on the condensing units of the Mitsubishi cooling systems for the server rooms have ambient air sensors which are reading incorrectly. This causes the hoods to keep the dampers open too much, which results in lower head pressure, which in turn leads to inefficient cooling. I measured the resistance of these sensors, compared it against the correct resistance for the actual outdoor air temperature, and added series resistors to bring the readings in line with the correct temperature. I will replace these sensors once we get a hold of new ones.
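For reference, the arithmetic behind the interim fix is just making up the resistance deficit in series. A minimal sketch, assuming the sensor reads low, with made-up placeholder values (the real correct resistance comes from the manufacturer's temperature/resistance lookup table):

# Illustrative sketch only -- not site code. Computes the series resistance
# needed so a sensor circuit presents the resistance that corresponds to the
# true outdoor air temperature.
def series_resistor(r_measured_ohm: float, r_correct_ohm: float) -> float:
    r_series = r_correct_ohm - r_measured_ohm
    if r_series < 0:
        # a series resistor can only add ohms; a sensor reading high
        # would need a parallel resistor (or replacement) instead
        raise ValueError("sensor resistance is already above the correct value")
    return r_series

# placeholder example: sensor presents 9.8 kOhm, but the lookup table says
# the current outdoor temperature corresponds to 10.5 kOhm
print(series_resistor(9_800.0, 10_500.0))  # -> 700.0 ohms to add in series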
LHO VE
david.barker@LIGO.ORG - posted 10:12, Thursday 30 November 2023 (74495)
Thu CP1 Fill

Thu Nov 30 10:04:16 2023 INFO: Fill completed in 4min 13secs

Travis confirmed a good fill curbside. Even with the new trip temps of -80C this one barely squeaked by, with TC A,B mins of -83.0C and -77.6C respectively.

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 08:26, Thursday 30 November 2023 - last comment - 10:43, Thursday 30 November 2023(74489)
OPS Day Shift Start

TITLE: 11/30 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 2mph 5min avg
    Primary useism: 0.10 μm/s
    Secondary useism: 0.94 μm/s
QUICK SUMMARY:

IFO has been trying to lock DRMI (and failing) for about 3 hours (due to microseism).

PSL DUST 102 is still acting up per previous alogs (74421).

Attempting to continue locking.

Comments related to this report
camilla.compton@LIGO.ORG - 08:49, Thursday 30 November 2023 (74490)

Before/after the lockloss, the microseism ground movement increased. During last night's lock, as Ryan said in 74487, the low frequency range BLRMs were again elevated (compare to alog 74473). The BLRMs were especially high before 09:45UTC (t = -45m on attached plot); this doesn't seem correlated with the ground movement increase.

Images attached to this comment
ryan.crouch@LIGO.ORG - 09:08, Thursday 30 November 2023 (74492)OpsInfo, SEI

The windy.com forecast does not bode well for the microseism this weekend. The wind speed, wave height, and swells are all increasing this weekend, peaking on Monday on both coasts, with the Pacific increasing more dramatically (by almost a factor of 2 at maximum compared to early today) than the Atlantic.

camilla.compton@LIGO.ORG - 10:43, Thursday 30 November 2023 (74498)

I checked the LSC MICH, PRCL, and SRCL loops (from nuc25), comparing last night's lock at 2023/11/30 7:55UTC (with increased low frequency noise) to the quiet DARM time 2023/11/23 6:45UTC and the fuzzy DARM time 7:15UTC; the LSC loops looked the same at these three times.

Images attached to this comment
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 06:58, Thursday 30 November 2023 (74488)
Lockloss at 12:57UTC

H1 lock losses (caltech.edu)

We haven't been able to relock following this LL; the Yarm is very unstable and keeps losing lock. Lots of low-state LLs: FIND_IR, DRMI (mostly), PRMI, GREEN_ARMS.

H1 called for assistance at 14:39UTC from the DRMI timer expiring after an initial alignment had already been run. The DRMI locking issues are likely from the elevated microseism.

LHO General
ryan.short@LIGO.ORG - posted 00:01, Thursday 30 November 2023 (74486)
Ops Eve Shift Summary

TITLE: 11/30 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Struggled early in the shift to get DRMI to lock, but once successful, everything else ran smoothly.

H1 has now been locked and observing for 5 hours. If we lose lock, it may be challenging to relock with the elevated microseism.

LOG:

Start Time System Name Location Lazer_Haz Task Time End
00:11 PEM Robert LVEA, MER - Retrieve equipment, adjust fans 00:15
00:15 TCS Corey MER - TCS chillers 01:15
LHO General
ryan.short@LIGO.ORG - posted 20:15, Wednesday 29 November 2023 (74487)
Ops Eve Mid Shift Report

H1 is back to observing at 140Mpc as of 03:10 UTC.

The biggest struggle with lock acquisition following the challenges of the day came at the DRMI locking steps. The POP18 and POP90 flashes have been consistently low recently, so ISC_LOCK went through PRMI to start, which was successful, but PRMI ASC wasn't able to make the signal much better. When moving on to try and lock DRMI, I paused ISC_LOCK to give myself time to try a few things. After a while of adjusting triggering thresholds and LSC servo gains, the settings that finally worked to keep DRMI locked long enough for me to adjust alignment and engage DRMI ASC were:

After DRMI ASC was engaged and offloaded, I continued locking by resuming ISC_LOCK. I also set the LSC servo gains back to their nominal value of 1.0. The rest of lock acquisition went smoothly and unaided.

That said, BNS range is still on the low side and Camilla pointed out that the low frequency noise she noted in alog 74473 is back. Also, the secondary microseism is well above the 95th percentile and still rising, so things are unstable.

H1 SQZ (OpsInfo)
camilla.compton@LIGO.ORG - posted 16:58, Wednesday 29 November 2023 (74485)
SQZ_ANG_ADJUST Guardian now controls SQZ Angle Servo, SQZ_MANAGER nominal back to FREQ_DEP_SQZ

Naoki, Sheila, Camilla

Sheila/Dave created a new SQZ_ANG_ADJUST guardian yesterday (74454), and today Naoki and I moved the ADJUST_SQZ_ANG_ADF and ADJUST_SQZ_ANG_DITHER states that were added to SQZ_MANAGER in 74428 over to it; graph attached.

Now SQZ_MANAGER's nominal is set back to FREQ_DEP_SQZ, and within this state, SQZ_ANG_ADJUST is requested to its nominal ADJUST_SQZ_ANG_ADF as long as sqzparams.use_sqz_angle_adjust = True.
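For illustration, a minimal sketch of what that handoff could look like in Guardian code. This is not the actual SQZ_MANAGER source: the sqzparams flag and state names come from this entry, but the class body and NodeManager usage are assumptions in standard Guardian style, and it only runs inside the site Guardian environment.

# Hypothetical sketch, not the real SQZ_MANAGER code.
from guardian import GuardState, NodeManager

import sqzparams  # site configuration module (assumed import path)

# SQZ_ANG_ADJUST is managed as a subordinate node
nodes = NodeManager(['SQZ_ANG_ADJUST'])

class FREQ_DEP_SQZ(GuardState):
    request = True

    def main(self):
        if sqzparams.use_sqz_angle_adjust:
            # request the angle-adjust guardian's nominal state
            nodes['SQZ_ANG_ADJUST'] = 'ADJUST_SQZ_ANG_ADF'

    def run(self):
        return True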

Nothing should be different in the working of the system apart from the nominal guardian states, tagging OpsInfo.

We had to restart SQZ_MANAGER as we deleted some of its states. I tried to request SQZ_READY_IFO before reopening the medm and actually requested ALIGN_HD, which offloaded ZM5,6 ASC and took the ZMs, FCs, and PSAMs to homodyne settings. After reverting these, the SQZ FC green locked, so alignment should be fine.

Images attached to this report
H1 ISC
dana.jones@LIGO.ORG - posted 16:37, Wednesday 29 November 2023 - last comment - 11:51, Friday 08 December 2023(74481)
DCPD matrix changes in time

Dana, Sheila

I plotted how the DCPD matrix has changed over the last eight years in the attached figure (DCPD_sums_and_nulls_8years). I've also included zoomed-in snapshots of all the major changes within this time span, with the dates and times shown (changes 1-7 attached below). For each of these jumps in the matrix values, I checked the alogs for that day and attached any potentially relevant ones explaining what was going on at the time below:

change 1 (1) (2)

changes 2 & 3 (1) (2)

change 4 (1)

change 5 (1) (2)

change 6 (1)

change 7 (1) (2) (3) (4) (5)

Images attached to this report
Comments related to this report
dana.jones@LIGO.ORG - 11:51, Friday 08 December 2023 (74683)

Jeff (alog 29856) and Jenne (alog 30668) both made measurements of the imbalance between A and B of the OMC DCPDs, i.e., the ratio of B to A, back in 2016 after the OMC was replaced. I measured this ratio just now to see if this value has changed greatly since then. I targeted the XPCal line at 102.13 Hz at GPS time = 1386096618 s and measured a value for B/A = 0.963 +/- 0.001. Jeff found B/A = 0.958 and Jenne found B/A = 0.969, so it seems this imbalance has not changed much since the OMC was replaced. See attached figure for transfer function.
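For anyone repeating this check, a hedged sketch of one way to estimate the same line ratio with gwpy is below. The GPS time and 102.13 Hz line are from this comment; the DCPD channel names, data duration, and FFT settings are assumptions, and this is not the script that produced the quoted numbers.

# Illustrative sketch: DCPD B/A ratio at a calibration line via an
# averaged transfer function estimate, TF = CSD(A, B) / PSD(A).
import numpy as np
from gwpy.timeseries import TimeSeries

gps = 1386096618
a = TimeSeries.get('H1:OMC-DCPD_A_OUT_DQ', gps, gps + 600)  # assumed channel name
b = TimeSeries.get('H1:OMC-DCPD_B_OUT_DQ', gps, gps + 600)  # assumed channel name

tf = a.csd(b, fftlength=100, overlap=50) / a.psd(fftlength=100, overlap=50)

f_line = 102.13  # XPCal line frequency (Hz)
idx = np.argmin(np.abs(tf.frequencies.value - f_line))
print(f'B/A at {f_line} Hz: {abs(tf.value[idx]):.3f}')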

Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 16:08, Wednesday 29 November 2023 (74484)
Ops Eve Shift Start

TITLE: 11/29 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 0mph Gusts, 0mph 5min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.72 μm/s
QUICK SUMMARY:

H1 is running an initial alignment after hopefully having fixed the locking issues from earlier today. See alogs 74472, 74478, 74480, and 74483 for a rundown of efforts made thus far.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:02, Wednesday 29 November 2023 (74483)
OPS Day Shift Summary

TITLE: 11/29 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

21:41 UTC Jim is fixing some noisy CPSs in IX (discovered during weekly check).

22:00 UTC to Now

BS Oplev glitch: Sheila found a BS OpLev glitch that has the same form as the one Jenne saw in the PRMI locking ASC signal (prior to the gain reduction). We checked whether this glitch is present during DOWN (this morning) to see if this is inherent glitching or not. We found it at that same time, albeit differently shaped. We now think this may be caused by OpLev damping imprinting its glitch on the signal. Interestingly, this OpLev damping isn't actually on when PRMI ASC engages, which is where the earlier problems were. Therefore, the plan is to go through initial alignment, since we restored the alignment back to a different time, and then wait for the glitch. Once it's seen, we turn off the OpLev damping to see if it persists. This would bring us closer to knowing the story behind the glitch. This glitch would not be the main cause of the locking issues. Doing another final (for my shift at least) initial alignment as of 23:45 UTC. Details and screenshots in Sheila's alog 74480.

LOG:                                                                                                                                                                                                                                                                                     

Start Time System Name Location Lazer_Haz Task Time End
17:25 PEM Robert EY N Picking up equipment in prep for commissioning 17:25
17:26 FAC Karen Vac prep and Optics Lab N Technical cleaning 17:26
17:26 FAC Eric CER M Checking thermostat/temp 17:26
18:30 VAC Gerardo and Jordan FCTE N Pulling cardboard out 19:15
18:31 FAC Karen Woodshop N Technical cleaning 19:31
18:42 SEI Jim LVEA N Ham 1 watchdog trip investigation 19:12
18:43 PEM Robert LVEA N Setting up for Wed commissioning 19:43
21:25 SQZ Camilla LVEA Y SQZ Work 22:25
21:26 PEM Robert LVEA N Shakers 22:26
21:26 DetChar Daniel Optics Lab N Unstated 22:26
H1 AOS
sheila.dwyer@LIGO.ORG - posted 15:13, Wednesday 29 November 2023 (74480)
BS optical lever glitching, side problem to today's locking issues

In trying to diagnose locking issues today, we've been sidetracked for a while because of glitches from the BS optical lever, which was causing a problem for PRMI locking. 

The first attached screenshot shows a glitch similar to the ones that have been bothering us in PRMI, which shows up in the optical lever, and also the top mass osems.  This first screenshot is with optical lever damping on. The second screenshot shows a glitch in the optical lever signal with the damping off, which does not show up in the top mass osems. 

 

Images attached to this report
H1 SQZ
naoki.aritomi@LIGO.ORG - posted 15:03, Wednesday 29 November 2023 (74479)
Pump AOM and fiber alignment on 20231129

Camilla, Naoki

Since the pump ISS control monitor was close to 1 and the ISS was unstable, we aligned the pump AOM and fiber following 72081.

First we requested the SQZ_MANAGER guardian to DOWN and checked the AOM throughput with a 0V ISS drivepoint. The AOM throughput was only 21.3 mW/36.9 mW = 58%. After we aligned the AOM, the throughput was 34.1 mW/36.9 mW = 92%.

Then we set the ISS drivepoint at 5V and aligned the AOM by maximizing the +1st order beam. After the alignment, the 1st order beam is 11 mW and the 0th order beam is 21.1 mW.

After we aligned the pump fiber, H1:SQZ-OPO_REFL_DC_POWER improved from 1.6 V to 2.97 V.

Finally, we requested the OPO guardian to LOCKED_CLF_DUAL and confirmed that the ISS can be locked with 80uW OPO trans. The ISS control monitor is 6.8, the SHG output power is 50.3 mW and the pump going to fiber is 17.9 mW.

H1 ISC (CAL)
dana.jones@LIGO.ORG - posted 14:48, Wednesday 29 November 2023 (74477)
Calibrating MICH to DARM

Dana, Louis, Jenne

This is a step-by-step guide for how to produce a variety of figures using diaggui (for my future reference and/or other people who are learning this system). I've also included all the relevant information to reproduce the attached figure, where MICH has been calibrated to DARM (both in units of m) so that we can compare how they were affected by an excitation that was injected on June 22nd 2023.

Open a terminal and type diaggui. Make sure nothing is active in the Excitation tab. Then go to the Measurement tab and enter the two channels we want to compare, in this case they are H1:CAL-CS_MICH_DQ and H1:CAL-DELTAL_EXTERNAL_DQ. Make sure both are checked off.

Next set the frequency range to something large, say 0 Hz to 7000 Hz, and change the binwidth (BW) to 0.1 Hz. Also set number of A channels to 2 because we aren't sure yet which channel we want to be our A channel (i.e., which one we want in the denominator of the transfer function; remember if you give it one A channel it will use whichever channel you have in the top, 0th slot, and it counts down from there if you give it more than one A channel).

Then choose a target date, here we used 22/06/2023 at 17:31:26 UTC. Because this is more than a couple months in the past, the data will be stored on tapes so we also need to change the data source. Under the Input tab, switch from Online system to NDS2 and under NDS2 Selection click the dropdown next to Server and choose nds.ligo-wa.caltech.edu. Then go back to the Measurement tab and click start! (If you forget to switch from online to NDS2, the run will stall and never finish but it also won't give you an error message...)
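(As an aside, the same data can be pulled programmatically with the nds2 client; a minimal sketch is below. The server name and channels are from this entry; the GPS start time is the conversion of the date above, and the 60 s duration is an arbitrary illustration.)

# Illustrative sketch, not part of the diaggui procedure.
import nds2

conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)

start = 1371490304  # GPS for 2023-06-22 17:31:26 UTC
buffers = conn.fetch(start, start + 60,
                     ['H1:CAL-CS_MICH_DQ', 'H1:CAL-DELTAL_EXTERNAL_DQ'])

for buf in buffers:
    # each buffer carries the channel metadata and a numpy data array
    print(buf.channel.name, buf.channel.sample_rate, len(buf.data))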

Once it's finished running, go to the Result tab. There will probably be two graphs that are produced, one showing the magnitude in frequency for each channel and one showing the coherence between the two channels. To change to and from log scale and to set axis limits use the Range tab. The position of the legend can also be changed in the Legend tab, and vertical or horizontal cursors can be added using the Cursor tab. To add more panels use the Options tab.

Both power spectra have been whitened, so to undo that we use the Calibration tab. Choose which channel to calibrate, then click New. Reference can be whatever; add proper units (meters in this case) and choose a date that is ideally a bit before the date we've plotted data from. Click Okay and make sure default is checked off. For the transfer function information, we used the Pole/Zeros tab for MICH and imported transfer function information for DARM using the Trans. Func. tab.

Starting with MICH: go to sitemap, click on CAL CS, then LSC calib. Click on the transfer function button stemming from the sum of ERR and CTRL; this is the transfer function that did the whitening. To get the poles and zeros, right click somewhere in the background of the window that pops up, then choose Execute and Foton. Then click on the rectangle above the filter that was used (in this case FM2). DO NOT CLICK ON FM2! This brings up another window which shows the zeros and poles, i.e., zpk, where the first array is the zeros and the second array is the poles. To undo the whitening, go back to the Calibration tab and invert this function: check off Pole-zero, then type the zeros in the poles box and the poles in the zeros box, each separated by a comma and a space (see the scipy sketch at the end of this entry for the same inversion done programmatically). MICH is measured in um instead of m, so also apply a gain of 1e-6 to convert it to m. Then click Set and Okay. This should have reversed the whitening.

For DARM it is generally the same process, but instead we use the Trans. Func. tab: check off Transfer function, then click Edit, then File, Open. Upload a .txt file with the transfer function information, which we grabbed using this command in the terminal:

pydarm export -r 20230621T211522Z --deltal_external_calib_dtt

which returned this:

>>> /var/opt/conda/base/envs/cds/lib/python3.9/site-packages/pandas/core/computation/expressions.py:21: UserWarning: Pandas requires version '2.8.0' or newer of 'numexpr' (version '2.7.3' currently installed).
>>>  from pandas.core.computation.check import NUMEXPR_INSTALLED
>>> INFO | searching for '20230621T211522Z' report...
>>> INFO | found report: 20230621T211522Z
>>> INFO | using model from report: /ligo/groups/cal/H1/reports/20230621T211522Z/pydarm_H1.ini

and created the file /ligo/home/dana.jones/Documents/cal_MICH_to_DARM/deltal_external_calib_dtt.txt.

Now both curves should be calibrated properly. Make sure to save continuously! This should be saved as a .xml file, so later you can reopen it by typing diaggui <filename>.xml in the terminal. The file we created is /ligo/home/dana.jones/Documents/cal_MICH_to_DARM/cal_mich_to_darm_11_29_23.xml.

To add a new graph click on the right-pointing arrow next to an empty cell and under the Traces tab use the dropdown next to Graph to tell it what style of graph you want. Check off the Active box and tell it which channel you want, where the convention is B/A. To make the phase plot, go to the Units tab and in the Y dropdown click on Phase (degree).

Remember to save one more time when finished. Compare the resulting figure (attached below) to Fig. 3.10 from Jenne's thesis. To review/summarize, the top left panel is showing MICH and DARM during an excitation on June 22nd, the bottom left panel is showing the coherence between MICH and DARM, and the two right panels are showing the magnitude and phase of the transfer function from MICH to DARM.
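Finally, as a cross-check of the pole/zero inversion step above, here is a hedged scipy sketch (not part of the diaggui workflow) showing that swapping the zeros and poles and inverting the gain exactly undoes a zpk filter. The zero/pole/gain values are made-up placeholders; the real ones come out of Foton as described.

# Illustrative sketch only: verify that the inverse of a zpk whitening
# filter is (poles <-> zeros, 1/gain), then fold in the um -> m gain.
import numpy as np
import scipy.signal as sig

# placeholder whitening filter (s-domain, rad/s)
zeros = [-2 * np.pi * 1.0]    # e.g. a zero at 1 Hz
poles = [-2 * np.pi * 100.0]  # e.g. a pole at 100 Hz
gain = 100.0

# the inverse filter swaps zeros and poles and inverts the gain
inv_zeros, inv_poles, inv_gain = poles, zeros, 1.0 / gain

# evaluate both responses on a log frequency axis and confirm they cancel
f = np.logspace(0, 3, 500)
w = 2 * np.pi * f
_, h_white = sig.freqs_zpk(zeros, poles, gain, worN=w)
_, h_inv = sig.freqs_zpk(inv_zeros, inv_poles, inv_gain, worN=w)
assert np.allclose(h_white * h_inv, 1.0)

# MICH is recorded in um, so an extra gain of 1e-6 converts the
# de-whitened result to meters
calib = h_inv * 1e-6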

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 13:28, Wednesday 29 November 2023 - last comment - 15:40, Wednesday 29 November 2023(74478)
OPS Day Midshift Update

Midshift Update: Still troubleshooting (but finding issues and chasing leads)

Here are the main stories, troubleshooting ideas and control room activities so far...

 

TIME: 18:08 UTC - 19:00 UTC

Codename: Ham 1 Watchdog Trip

Ham 1 watchdog tripped because Robert hit the damping when it started "audibly singing". We untripped it. At 18:41 UTC, Jim suggested going down to Ham 1 to investigate the watchdog trip and the singing. See Jenne's alog 74476 for more. This is not the cause of our locking issues.

 

TIME: 18:09 UTC - 18:33 UTC

Codename: No Name Code

There was an ALS_DIFF naming error in a piece of code that TJ pushed this morning. Jenne found it; we edited the code, reloaded the ALS_DIFF guardian (via stop and exec), and it was fine. This was initially thought to be part of the watchdog trip but was not. This is not the cause of our locking issues.

 

TIME: 17:30 (ish) UTC - 21:30 UTC

Codename: Fuzzy Temp

The temperature excursion that was caught and apparently fixed yesterday has not actually been fixed. While the temperatures have definitely returned to "nominal" and are within tolerance, the SUP temperature monitor is reporting extremely noisy temperatures that plateaued at a higher level (1.5 degrees) than before the excursion. In addition, the readings are extremely fuzzy. I went down to the CER with Eric and we confirmed that both thermostats are correct in their readings, eliminating that potential cause. Per Fil and Robert's investigation, there are a few reasons this may be happening:

  1. It could be the temperature sensor itself causing excess noise and potentially contributing to its own possibly erroneous temperature report.
  2. It could be a faithful reading of another nearby source moving up and down in temperature quickly, causing this fluctuation (the noisiness generally oscillates with a ~15-min period). This would mean it is a controls and/or external CER machine issue and there's nothing wrong with the temp readout itself.

The proposed plan was to switch the cables between the more stable CER temp readout in the same room and the fuzzy SUP readout, to determine whether this issue was upstream (Beckhoff error) or downstream (temperature sensor). The cables were switched, and upon trending the channels we found that the noisy/fuzzy SUP readout (now plugged into the CER channel) became stable and vice versa. This meant that the noise followed the equipment being plugged in, i.e., the temperature sensor and/or its cable.

Fil switched the sensor out, but the fluctuation did not change. Robert had the idea that it could be a nearby air conditioner (AC5) turning on and off and thus causing the temperature fluctuations. He turned the AC off at 20:30 UTC and we waited to see the temperature response. We found that the AC was indeed the cause of the fluctuation (Screenshot 4).

This tells us that the AC behavior changed during yesterday's maintenance, causing it to be noisier. This noisiness was only perceived after the temperature excursion, and only appeared to change after the excursion was fixed.

Unfortunately, this means that the issue is contained to faulty equipment rather than faulty controls, which means this is not the cause of our locking issues.

See screenshots (1 → 4) to get an idea of the overall pre-switch noise and the post-switch confirmation.

TIME: 16:30 UTC - Ongoing

Codename: Mitosis

There is a perceived "cell splitting" jittering in the AS AIR camera during PRMI's engage-ASC loop, which takes place after PRMI is locked. This jittering, given enough time, causes swift locklosses in this state, and is definitely worse in the presence of ASC actuation.

Jenne found no issues or glitches in the PRC optics (lower and higher stages) (Screenshot 5). Jenne did find a 1.18 Hz ring-up when PRMI is locked, and when that gets bad the glitches appear in POP18. Jenne found that the glitching seems to go away, and that the 1.18 Hz ringing went away, when she lowered the LSC MICH locking gain from its nominal 3.2 down to 2.5 (Screenshot 6).

Coil drivers: checked during troubleshooting to see if these might have caused/exacerbated lock issues; Rahul confirmed this not to be the case.

An idea so far was that the SUS-PRM-M3 stage may be glitching, but we needed to see if this glitch persists without the feedback that a locked PRMI would have; it was confirmed not to be glitching. Sheila just checked the same thing for the BS and the ITMs. We are left with less of an idea of what's going on now. The jittering in the AS AIR camera is, however, fixable this way (lowering the MICH gain). This was not changed in the guardian.

So this “Mitosis” issue is somewhat resolved (or at least bandaged as we investigate more).

Ideas of leads to chase are:

 

Stay tuned.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 15:40, Wednesday 29 November 2023 (74482)

It seems that the "mitosis" is BS optical lever glitching, which shouldn't prevent us from locking if we can get past DRMI (and wouldn't be responsible for the high noise and locklosses overnight).

74480

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 18:02, Tuesday 28 November 2023 - last comment - 09:02, Thursday 30 November 2023(74460)
Lockloss @ 01:39 UTC

Lockloss @ 01:39 UTC - no obvious cause; the online lockloss analysis failed.

Looks like PRCL saw the first motion by a very small margin.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 19:02, Tuesday 28 November 2023 (74461)

H1 back to observing as of 03:01 UTC.

oli.patane@LIGO.ORG - 09:02, Thursday 30 November 2023 (74491)
Images attached to this comment