H1 ISC
elenna.capote@LIGO.ORG - posted 10:33, Saturday 30 March 2024 (76811)
PRCL Injection test with L2A

Based on the PRCL injections I performed yesterday, alog 76805, I decided to try adjusting the PRM L2A gains to see if I could change the PRCL/CHARD Y coupling. For the test, I adjusted the gain in the PRM M3 L2Y drivealign bank. I used a ramp time of 30 seconds and started with small gains to avoid locklosses.
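
For reference, a gain change like this is usually just a matter of setting the filter bank's ramp time before writing the new gain; below is a minimal sketch of that pattern using pyepics (the channel prefix is my guess at the usual SUS naming convention, not copied from this test):

    # Minimal sketch: step a drivealign gain with a slow ramp to avoid kicking the suspension.
    # The channel prefix below is assumed/illustrative.
    from epics import caput
    import time

    BANK = "H1:SUS-PRM_M3_DRIVEALIGN_L2Y"   # assumed filter-bank prefix

    def set_l2y_gain(gain, ramp_s=30):
        caput(BANK + "_TRAMP", ramp_s)      # set the ramp time first
        caput(BANK + "_GAIN", gain)         # then write the new gain
        time.sleep(ramp_s)                  # wait for the ramp before taking data

    for g in (0.01, 0.03, 0.1):             # e.g. the small positive gains tried here
        set_l2y_gain(g)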

Generally, a positive L2Y gain made the PRCL/DARM coupling worse above 10 Hz. I tried gains of 0.01, 0.03, and 0.1. I saw the PRCL/DARM transfer function increase above 15 Hz at gains of 0.03 and 0.1 (no visible change at 0.01). There was no appreciable change in the PRCL/CHARD Y transfer function, but the coherence decreased as the gain increased. To check that I wasn't being fooled by thermalization (we were locked for 4.5 hours during these tests), I went back to zero gain and confirmed that the transfer functions were still the same.

I then tried the negative direction. Gains of -0.03 and -0.1 reduced the PRCL/DARM transfer function above 15 Hz, but with the -0.1 gain I noticed an increase in the coupling below 10 Hz. Again, little change in the PRCL/CHARD transfer function, but a reduction in coherence with increasing negative gain. While I was thinking about what this means, there was a lockloss from ETM saturations. It appears the changing L2A gain caused the lockloss. There was a ring up in the DARM control signal at about 1 Hz. The increased coupling at low frequency is probably to blame.

A comparison of CHARD/PRCL and DARM/PRCL at 0, 0.1 and -0.1 PRM L2Y gain is shown here.

Also of note: during these tests I saw no change in the PRCL/MICH and PRCL/SRCL coupling.

The fact that the PRCL/CHARD coupling is not changing appreciably while the PRCL/DARM coupling is changing is probably a sign that we are incorrectly compensating for some other problem.
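
For anyone repeating this offline, the injection transfer functions and coherences can be estimated from the frames; here is a rough sketch with gwpy (the GPS times are placeholders and the channel choices are my assumption, not necessarily what was used for these measurements):

    # Rough sketch: PRCL -> DARM transfer function and coherence during an injection.
    from gwpy.timeseries import TimeSeries

    start, end = 1395780000, 1395780300       # placeholder GPS span of an injection
    prcl = TimeSeries.get("H1:LSC-PRCL_OUT_DQ", start, end)
    darm = TimeSeries.get("H1:CAL-DELTAL_EXTERNAL_DQ", start, end)

    fftlen, ovlp = 8, 4                        # seconds
    tf = prcl.csd(darm, fftlength=fftlen, overlap=ovlp) / prcl.psd(fftlength=fftlen, overlap=ovlp)
    coh = prcl.coherence(darm, fftlength=fftlen, overlap=ovlp)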

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:12, Saturday 30 March 2024 (76813)
Sat CP1 Fill

Sat Mar 30 10:09:46 2024 INFO: Fill completed in 9min 42secs

Images attached to this report
H1 General (CDS, OpsInfo)
anthony.sanchez@LIGO.ORG - posted 09:47, Saturday 30 March 2024 - last comment - 09:18, Sunday 31 March 2024(76812)
Update on teamspeak status.

The server we usually use for TeamSpeak is teamspeak.ligo.org, which I believe is hosted offsite, possibly at MIT. It is ping-able, but the TeamSpeak service is no longer working.
LLO's operators cannot log in to this server either.

I reached out to Dave this morning about this, and he mentioned something right before he hung up to investigate that made me want to check the passwords page for another server name and password.
On the passwords page I found another TeamSpeak server, teamspeak3.ligo.org, which uses the same password as the old server!
I logged in and sent a link to Elenna to see if she could log in. Dave, Elenna, and I all met in there at effectively the same time and confirmed that it works.

The current issue is that there is no LHO Control Room channel, and I don't have the credentials to make a permanent one.
I am currently hanging out in the LHO COMM/OPS MEETING ROOM.
If you have any questions about this, please call the operator in the LHO control room at (509) 372-8204.
LLO has confirmed that they are also in this new Teamspeak server.

Comments related to this report
anthony.sanchez@LIGO.ORG - 09:18, Sunday 31 March 2024 (76828)CDS
I've been informed that Fred Donovan out at MIT, who manages that server, has corrected the TeamSpeak networking issue.

We can now log back into the normal server: teamspeak.ligo.org, which has all of our normal channels.

Thank you to Fred Donovan for finding and correcting the issue, and to David Shoemaker for reaching out to let us know the issue is resolved.

H1 General (CDS)
anthony.sanchez@LIGO.ORG - posted 08:13, Saturday 30 March 2024 (76809)
Saturday Ops Day Shift Start

TITLE: 03/30 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 13mph Gusts, 10mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.21 μm/s
QUICK SUMMARY:
When I arrived, H1 had been locked for 4 hours but was not observing.
I have since taken the observatory mode to Observing until a commissioner decides to start running tests.

TeamSpeak is not working; I'm getting a "failed to connect to server" error. I'm restarting the verbals computer now.
Please call (509) 372-8204 to reach the control room until I can figure out how to fix TeamSpeak.

Everything else looks pretty good. 

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 00:00, Saturday 30 March 2024 (76808)
OPS Eve Shift Summary

TITLE: 03/30 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: None
SHIFT SUMMARY:

IFO is in NLN and OBSERVING as of 06:08 UTC (56 min lock)

Lock Acquisition and Issues with PRMI Locking - Part 2

Check Midshift Update (alog 76807) for Part 1

The same weird, brief PRMI behavior happened and it just couldn't lock. I let Guardian run its automation through MICH Fringes, but after PRMI unlocked 4 more times and couldn't even recover PRMI from DOWN like last lock, I ran an initial alignment. While it was a slow initial alignment, it seemed to fix something, because locking afterwards was extremely smooth (DRMI locked without PRMI).

It is worth mentioning that throughout my two lock acquisitions today, DRMI could not lock after going through PRMI/MICH - either DRMI locked immediately, or we lost lock/got stuck some time after trying to lock PRMI.

LOG:

03:42 UTC - DRMI Unlocked, cause unknown

03:56 UTC - Lockloss at ENGAGE_DRMI_ASC after getting DRMI locked

04:52 UTC - Running initial alignment - locking getting nowhere and failing at PRMI, showing the same strange noisy trace

05:23 UTC - Initial Alignment complete - took an uncharacteristic 35 minutes

06:04 UTC - NLN Reached (41 mins from down!)

06:08 UTC - H1 is OBSERVING

Start Time System Name Location Lazer_Haz Task Time End
17:26 ops LVEA corner YES LVEA is Laser HAZARD !!!! 15:24
20:10 PEM Robert LVEA YES PEM Injections 20:43
20:44 PRCL Elenna Remote N PRCL Investigations 20:56
20:57 SQZ Naoki CrtlRm N 100/100 ASQZ 21:21
21:24 CARM Jenne W CtrlRm N Common mode board tests 21:25
21:29 SQZ Naoki CtrlRm N 125/125 ASQZ 21:52
21:53 PEM Robert CtrlRm N Shaking Injection on the input arm. 22:10
22:19 PRCL Elenna Remote N PRCL Measurements. 22:33
22:35 SQZ Naoki CrtlRm N Quiet Time no Sqz 22:51
22:52 PEM Robert CtrlRm N PEM Shaking input side @ 12.6hz 23:00
23:18 PCAL Tony, Francisco Pcal Lab Local Getting LLO Intergrating Sphere 00:18
23:47 PEM Robert LVEA YES Taking photos 00:47
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 21:02, Friday 29 March 2024 (76803)
OPS Eve Shift Start

TITLE: 03/29 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PDT), all times posted in UTC
STATE of H1: Locking
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 11mph Gusts, 8mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.26 μm/s
QUICK SUMMARY:

IFO is LOCKING - planning to go into observing as soon as we get to NLN

Other:

1 Dust monitor not working properly - H1:PEM-CS_DUST_PSL101 - Error: data set contains 'not-a-number' (NaN) entries

 

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 20:04, Friday 29 March 2024 (76807)
OPS Eve Midshift Update

IFO has been in NLN and OBSERVING since 00:46 UTC (2 hr 25 min lock)

 

Lock Acquisition and Issues with PRMI Locking

While locking, we would only lock PRMI for 5-10 seconds before it would lose lock (not a complete lockloss, just PRMI). Interestingly, the same wiggly behavior was seen in the multiple PRMI locklosses (Screenshot 1). Trending the LSC loops more generally (there was mention of movement before our last lockloss), there seems to have been motion in PRCL and MICH 5 seconds before the PRMI lockloss. In the 9 s that this particular screenshot's PRMI lock held, there was an oscillation seen in MICH OUT and more strongly in PRCL IN (screenshot 2). No clue if this is relevant, but it seems to have happened before the PRMI lockloss, so it may have caused it.

Seeing that the input channels had notably more fluctuation in the signal, I checked SRCL IN and MICH IN and noticed that the LSC MICH IN signal was the loudest, getting a strange kick that seems to have led into the PRMI lock but got worse as time passed. Screenshot 3 illustrates this.

After 10 or so locklosses in PRMI, we lost lock completely, and interestingly, upon re-locking, this issue did NOT occur. I trended the LSC behavior during the DRMI lock (which was quickly successful and didn't actually go through PRMI) and no LSC kicks were present. This leads me to think that there may be something exacerbating the PRMI locking and relocking (in the LSC loops) that does not affect DRMI locking.

Or maybe this is just a red herring: these loops only run for PRMI locking anyway, and were a tad more aggressive in this alignment configuration, so a total lockloss was needed to "try again".

 

SDF Diffs

Per Jennie's direction, I accepted SDF diffs that corrected previously accepted incorrect SDF diffs (screenshot 4).

 

Log:

00:39 UTC - Reached NLN

00:46 UTC - IFO is OBSERVING

Images attached to this report
H1 ISC
gabriele.vajente@LIGO.ORG - posted 16:25, Friday 29 March 2024 - last comment - 12:02, Saturday 30 March 2024(76805)
PRCL to other signals TFs

[Elenna, Gabriele]

Elenna did two PRCL noise injections, at times separated by about 1h40m. We measured the transfer function to the other LSC loops and to REFL_RIN, since we observed coherence of DARM with RIN and DARM with PRCL.

The most striking observations are:

  1. most transfer functions are very smooth with 1/f or 1/f^2 shape
  2. they changed significantly between the two measurements

The transfer function PRCL_IN1 / PRCL_OUT should be a good measurement of the optical gain. The gain measured the second time is ~0.75 the gain measured the first time. So we're losing PRCL optical gain over time. Not a new story. Probably thermal effects.

PRCL to MICH and SRCL coupling got smaller.

PRCL to CHARD_P got smaller, but PRCL to CHARD_Y got larger (by a factor 3-4, depending on frequency). This might be explained if the beam spot is moving on the PRM over time, in yaw, to increase the length to angle coupling. It's interesting that this is happening in yaw and we know we have a yaw alignment problem in the PRC.

Another interesting coupling is from PRCL to REFL_RIN. Ideally we should not have any linear coupling from PRCL to REFL power. This could happen if PRCL or CARM were locked off resonance. The fact that the coupling RIN / PRCL is getting larger (by a factor 2) might indicate that the PRCL (or CARM) offset is changing over time. Probably thermal effects? Maybe also related to the change in optical gain?

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 08:57, Saturday 30 March 2024 (76810)

Following up on the PRCL to RIN coupling. From a measurement in 68899, I estimate that the PRCL actuation strength is 1e-7 microns/cts at 4 Hz, assuming the SUS-ISCINF counts are the same as PRCL_OUT counts and that the OSEM L witness is calibrated in microns. This allows me to convert REFL_RIN / PRCL_OUT into REFL_RIN / PRCL displacement. It is fairly flat, and in the two measurements it changed from 200 1/um to 400 1/um.

From a simple double cavity model, one can compute the RIN in reflection as a function of the PRCL and CARM offsets from resonance. The actual REFL power depends a lot on the losses and reflectivities of the mirrors, and here I haven't included any sidebands. So this is at best an order-of-magnitude guess.

This simple model shows, as expected, a linear dependence of the PRCL > RIN coupling on the PRCL offset. To explain the measured coupling one would need a PRCL offset between 0.025 and 0.050 nm. This seems small enough to be realistic.
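
As a sanity check of that scaling (this is not the double cavity model used for the attached plot, and the mirror parameters below are made up), even a single lossy Fabry-Perot cavity shows the reflected-RIN coupling growing linearly with a small static offset:

    # Toy single-cavity model: d(P_refl)/dx / P_refl grows linearly with the static offset x0.
    # Mirror reflectivities and loss are illustrative only; no sidebands, no second cavity.
    import numpy as np

    lam = 1064e-9                               # wavelength [m]
    r1, rtl = 0.985, 0.9999 * (1 - 50e-6)       # input mirror, and end mirror with lumped loss

    def refl_power(x):
        phi = 4 * np.pi * x / lam               # round-trip phase for a length offset x
        r = (-r1 + rtl * np.exp(1j * phi)) / (1 - r1 * rtl * np.exp(1j * phi))
        return abs(r) ** 2

    def rin_per_um(x0, dx=1e-13):
        dPdx = (refl_power(x0 + dx) - refl_power(x0 - dx)) / (2 * dx)
        return dPdx / refl_power(x0) * 1e-6     # RIN per micron of cavity length motion

    for x0_nm in (0.0125, 0.025, 0.05):
        print(x0_nm, "nm offset ->", rin_per_um(x0_nm * 1e-9), "1/um")
    # the coupling roughly doubles each time the offset doubles (linear in the offset)

The absolute numbers from such a toy model depend entirely on the chosen reflectivities and losses; only the linear scaling with the offset is the point here.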

My guess is that this offset is probably due to higher order modes created by the yaw misalignment in the PRC

Images attached to this comment
gabriele.vajente@LIGO.ORG - 12:02, Saturday 30 March 2024 (76815)

Using the same actuation strength estimate, and the measured TFs from PRCL_OUT to PRCL_IN1, we can estimate the PRCL optical gain in the two cases: 2.3e6 and 1.7e6 cts/micron, where cts are measured at PRCL_IN1.

So the offsets that minimize PRCL to RIN would be 58 and 85 counts.
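
Just to make the arithmetic explicit, those counts follow from scaling the model offsets by the corresponding optical gains (first and second measurement respectively):

    # Offset in nm times optical gain in cts/micron gives the equivalent PRCL_IN1 counts.
    for offset_nm, gain_cts_per_um in ((0.025, 2.3e6), (0.050, 1.7e6)):
        print(offset_nm, "nm x", gain_cts_per_um, "cts/um =", offset_nm * 1e-3 * gain_cts_per_um, "cts")
    # 0.025 nm x 2.3e6 cts/um = 57.5 cts (~58), 0.050 nm x 1.7e6 cts/um = 85 cts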

H1 General
anthony.sanchez@LIGO.ORG - posted 16:14, Friday 29 March 2024 (76804)
Friday Ops Day Shift End

TITLE: 03/29 Day Shift: 15:00-23:00 UTC (08:00-16:00 PDT), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
Nominal_Low_Noise Reached at 20:03 UTC
The following commissioning tasks were attempted:
    Robert
    Elenna PRCL (currently doing, 20 mins)
    Naoki 100/100 ASQZ
    Jennie CMB OLG measure at 1h30 in NLN
    Naoki 125/125 ASQZ (20mins) until 2:50pm
    Robert (20mins) until 3:10pm
    Elenna PRCL (~10mins) until 3:30pm

    Robert (20mins)
    Camilla FF injections (maybe - could wait until we're better thermalized Monday)
The last 2 did not finish.

Lockloss at 23:00 UTC
The source of the lockloss is unknown, but there was a PEM group member in the LVEA "unplugging stuff" when the lockloss happened. More investigation is needed.
Screenshots attached.
LOG:

Start Time System Name Location Lazer_Haz Task Time End
17:26 ops LVEA corner YES LVEA is Laser HAZARD !!!! 15:24
15:50 PEM Robert LVEA YES Viewport Cameras 17:50
16:12 ISC JenneW & Camilla LVEA yes Power Cycling SR785 16:24
20:10 PEM Robert LVEA YES PEM Injections 20:43
20:44 PRCL Elenna Remote N PRCL Investigations 20:56
20:57 SQZ Naoki CrtlRm N 100/100 ASQZ 21:21
21:24 CARM Jenne W CtrlRm N Common mode board tests 21:25
21:29 SQZ Naoki CtrlRm N 125/125 ASQZ 21:52
21:53 PEM Robert CtrlRm N Shaking Injection on the input arm. 22:10
22:19 PRCL Elenna Remote N PRCL Measurements. 22:33
22:35 SQZ Naoki CrtlRm N Quiet Time no Sqz 22:51
22:52 PEM Robert CtrlRm N PEM Shaking input side @ 12.6hz 23:00
Images attached to this report
H1 OpsInfo
jennifer.wright@LIGO.ORG - posted 16:11, Friday 29 March 2024 (76802)
ALS Servo Diffs

Jenne, Jennie

 

As we were about to go to Observing today, we noticed two SDF DIFFs in OBSERVE for the EY ISC model.

Looking through the alog, Jenne noticed that Ryan accepted these diffs the other day, but they showed up the opposite way round as diffs today, meaning the Guardian changed them back as we locked. Therefore the evening operator should accept these diffs:

2 for COMBOOST and 6 for IN1GAIN.

H1 SQZ
naoki.aritomi@LIGO.ORG - posted 15:51, Friday 29 March 2024 (76800)
No sqz time

We took 10 minutes of no-SQZ time.


PDT: 2024-03-29 15:41:17-15:51:17 
UTC: 2024-03-29 22:41:17-22:51:17
GPS: 1395787295-1395787895
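
(If you want to cross-check segment boundaries like these, gwpy's tconvert handles the UTC/GPS conversion, for example:)

    # Quick UTC <-> GPS cross-check of the segment boundaries above.
    from gwpy.time import tconvert
    print(tconvert("2024-03-29 22:41:17"))   # -> 1395787295
    print(tconvert(1395787895))              # -> 2024-03-29 22:51:17 UTC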

H1 ISC (ISC, PEM)
jennifer.wright@LIGO.ORG - posted 15:24, Friday 29 March 2024 (76797)
Checked CARM OLG - 12 kHz

Camilla, Jennie W

Camilla and I measured the CARM loop after restarting the SR785 earlier.

We followed instructions here.

The OLG seems similar to two days ago after Sheila decreased the gain.

Our measurement was done about 1 hr and 17 mins into lock, so while we were still thermalising.

The measurement Sheila did was done about 2 hrs and 19 mins into lock.

Maybe we should measure this again during a well-thermalised lock so we will aim to do this on Monday.

Non-image files attached to this report
H1 SQZ
naoki.aritomi@LIGO.ORG - posted 15:14, Friday 29 March 2024 - last comment - 15:57, Friday 29 March 2024(76798)
PSAMS change to 125/125

Naoki, Nutsinee

We changed the PSAMS from 100/100 to 125/125. We ran the SCAN_ALIGNMENT with asqz-optimized. The result is here.

https://lhocds.ligo-wa.caltech.edu/exports/SQZ/GRD/ZM_SCAN/240329142738/

The asqz and sqz time for 125/125 PSAMS is as follows.

asqz
PDT: 2024-03-29 14:37:25-14:42:25
UTC: 2024-03-29 21:37:25-21:42:25
GPS: 1395783463-1395783763

sqz
PDT: 2024-03-29 14:47:30-14:52:30
UTC: 2024-03-29 21:47:30-21:52:30
GPS: 1395784068-1395784368

Comments related to this report
nutsinee.kijbunchoo@LIGO.ORG - 15:57, Friday 29 March 2024 (76801)

Here are PSDs of PSAMS 100/100 compared to 125/125. We reverted the PSAMS settings back to 100/100 after the test. ZM alignments have been reverted back to where they were at 100/100 (post SCAN_ALIGNMENT script). The SQZ angle was reverted to last night's lock value (184 degrees). We also put the ZM4, 5, and 6 slider bars back on the MONITOR list. The slider value changes have been accepted in SDF.

Images attached to this comment
H1 TCS
camilla.compton@LIGO.ORG - posted 12:28, Friday 29 March 2024 - last comment - 15:24, Thursday 10 October 2024(76794)
10kHz HOM in ER16 similar to O4a

Plot attached showing that the higher order modes around 10.4 to 10.6 kHz are in a similar position during ER16 (blue) as they were at the end of O4a (yellow).

This is expected, as we have not changed the TCS system (CO2 and RH settings are the same). Unsure why they are larger in the last long lock (2024/03/28); the noise floor changes with SQZ, but we wouldn't expect the peaks to change much.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 14:46, Friday 29 March 2024 (76796)

Vicky, Camilla. The attached plot compares no-SQZ times; the noise floor at 10 kHz seems to have decreased since O4a. Using the times in Jennie's 76516 shows similar noise floors.

Images attached to this comment
camilla.compton@LIGO.ORG - 15:24, Thursday 10 October 2024 (80600)

Late alog: After the emergency vent and OFI crystal swap in July/August, in late August we saw that the two HOM peaks had merged into a single peak. Plot: blue = pre-vent in July; brown and green = post-vent in August.

Dan Brown made the quick model attached, showing that the IFO-to-OMC mismatch does have an impact on the height of those peaks. From Dan: the plot looks at mode mismatch changes and how the laser frequency to DCPD coupling scales with them. If changing the OFI has modified the mode matching, or more likely just reduced HOM scattering, then seeing smaller peaks could be due to that.

Images attached to this comment
H1 SQZ
naoki.aritomi@LIGO.ORG - posted 16:33, Thursday 28 March 2024 - last comment - 14:31, Friday 29 March 2024(76777)
PSAMS change to 100/100

We changed the PSAMS from 150/90 to 100/100 and ran the SCAN_ALIGNMENT with sqz-optimized. We will leave this PSAMS setting tonight. The result of SCAN_ALIGNMENT is here.

https://lhocds.ligo-wa.caltech.edu/exports/SQZ/GRD/ZM_SCAN/240328143140/

Comments related to this report
naoki.aritomi@LIGO.ORG - 14:31, Friday 29 March 2024 (76795)

We ran the SCAN_ALIGNMENT with asqz-optimized. The result is here.

https://lhocds.ligo-wa.caltech.edu/exports/SQZ/GRD/ZM_SCAN/240329140112/

The quiet asqz time for 100/100 is as follows.

PDT: 2024/3/29 14:11:30-14:21:30
UTC: 2024/3/29 21:11:30-21:21:30
GPS: 1395781908-1395782508

H1 SQZ
naoki.aritomi@LIGO.ORG - posted 16:22, Thursday 28 March 2024 - last comment - 16:57, Friday 29 March 2024(76776)
SCAN_SQZANG state

Naoki, Vicky, Sheila, Camilla

To scan the sqz angle to optimize the squeezing in the bucket, we copied the SCAN_SQZANG state from LLO's SQZ_MANAGER guardian. This state finds the optimal sqz angle by minimizing BLRMS3 at 350 Hz. We replaced the 0.1 Hz LP with a 1 Hz LP for BLRMS3. We will test this state tomorrow.
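
The idea of the state is roughly: step the squeezing angle, let the (now 1 Hz) low-passed BLRMS settle, and keep the angle that minimizes it. A very rough sketch of that logic is below; the channel names, scan range, and step size are illustrative guesses, not the actual guardian code:

    # Very rough sketch of a sqz-angle scan minimizing a 350 Hz band BLRMS readback.
    # Channel names and scan range are assumed for illustration only.
    import time
    from epics import caget, caput

    ANGLE = "H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG"   # assumed sqz angle channel
    BLRMS = "H1:SQZ-DARM_BLRMS3_OUTPUT"            # assumed BLRMS3 (350 Hz band) channel

    best_angle, best_val = None, float("inf")
    for angle in range(140, 221, 5):
        caput(ANGLE, angle)
        time.sleep(5)                              # let the 1 Hz low-pass settle
        val = caget(BLRMS)
        if val < best_val:
            best_angle, best_val = angle, val

    caput(ANGLE, best_angle)                       # sit at the best angle found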

Comments related to this report
naoki.aritomi@LIGO.ORG - 16:57, Friday 29 March 2024 (76806)

We tested the SCAN_SQZANG state and it seems to work. The result is saved here.

https://lhocds.ligo-wa.caltech.edu/exports/SQZ/GRD/SQZANG_SCAN/

H1 ISC
gabriele.vajente@LIGO.ORG - posted 07:51, Wednesday 27 March 2024 - last comment - 12:47, Thursday 04 April 2024(76736)
Lock losses while moving input beam

All three times we tried to move the input beam (76534, 76607 and yesterday), we caused a lock loss when moving in the yaw direction.

Approximate lock loss times: 1394946678 1395096385 1395535842

All lock losses appear to show the same behavior:

Those lock losses are indeed very fast, and this seems to point to a CARM problem.

Images attached to this report
Comments related to this report
gabriele.vajente@LIGO.ORG - 15:12, Friday 29 March 2024 (76799)

Some smart person suggested checking the ISS loops as the cause of those lock losses. It looks like this is the culprit: the ISS second loop is the first one to go crazy before each of those three lock losses. The error and control signals both go away from their trend values when we see the first jump in IMC transmitted power.

So maybe the ISS second loop is very marginal now.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 12:47, Thursday 04 April 2024 (76954)

Agreed. Here are some more plots; it looks like we are probably saturating the AOM when we unclip the beam going into the second loop ISS array.

Also, it is interesting that at these times the out-of-loop PD power seems to increase as we move the input pointing (shown in the last plot, but similar for all three of these).

Edit: Keita suggested looking at individual PDs on the ISS; indeed, the individual PDs are moving in different directions.

Images attached to this comment