H1 General (Laser Transition)
anthony.sanchez@LIGO.ORG - posted 15:46, Tuesday 27 February 2024 (76009)
LVEA LASER TRANSITION to LASER SAFE

The LVEA has been Transitioned to LASER SAFE.
Work Permit: 11728

H1 SQZ (SQZ)
nutsinee.kijbunchoo@LIGO.ORG - posted 14:59, Tuesday 27 February 2024 (76006)
FCGS path in SQZT0 is in good health

Dhruva Swadha Nutsinee

We have ~9 mW pickoff from the pump path into FCGS path. ~5 mW of the 1st order beam transmits through the first AOM (GAOM2). 4.4 mW of the 1st order passes through the second (GAOM3), 4.35 mW measured after EOM3. We only touched the alignment using the steering mirrors and the waveplates. Dhruv suggested FC_REFL_LF_OUTPUT*1000 should be higher than 200 counts. We had about 300 counts when we closed out. There were already irises on the table to help pick out the first order and all the beam went through. We measured ~0.4 mW hitting FCGS REFL on SQZT7. We also recalibrated the FCGS REFL diode (responsivity was 0.3 A/W, now 0.22 A/W).

 

I think we're done with SQZT0 *fingers crossed*

Images attached to this report
H1 PSL
ryan.short@LIGO.ORG - posted 14:47, Tuesday 27 February 2024 (76002)
PSL PMC and FSS On-Table Alignment (WP #11716)

R. Short, J. Oberling

This morning, we took the opportunity to fix up the alignments of the PSL PMC and FSS path before the observing run resumes. We started by tweaking the beam alignment into the PMC remotely using the two picomotor-controlled mirrors with the ISS off. After maximizing the PMC transmitted signal, we experimented with adjusting the pump diode currents in both amplifiers to improve the mode-matching into the PMC. Ultimately, we ended up increasing both pump currents in AMP2 by 0.1A (from 8.7A to 8.8A).

Having improved PMC alignment, we then went into the PSL enclosure to touch up the FSS path alignment. We started with a power budget on the FSS path (done with the ISS on and both PSL Guardians paused):

The most obvious issues appeared to be the single and double passes through the AOM. We adjusted the AOM in both pitch and yaw to improve single pass diffraction, and mirror M21 to improve double pass diffraction. Our results:

Good improvements all around, although neither diffraction efficiency ended as high as after the last alignment (alog 74346). During this adjustment period, the PMC kept unlocking periodically due to the temperature drifting. We tried turning the temploop on and off at different points to keep the PZT in the range we wanted, but the PMC kept unlocking on us, causing this step to take longer than anticipated.

We then checked the alignment through the EOM; seeing it was good with no clipping and measuring 152 mW out, we locked the RefCav and started recovery of its alignment. Using the picomotor-controlled mirrors while watching the signal on the TPD:

Next, we touched up the alignment onto the RefCav RFPD by adjusting mirror M25 and using a multimeter to watch the DC voltage:

To finish off our activities in the enclosure, we measured the RefCav's visibility:

After leaving the enclosure, we resumed the PSL Guardians and are seeing that the PMC PZT signal is much more stable now that the temperature has stabilized, so the ISS is back on as well. I adjusted the ISS RefSignal to -1.95 to bring the diffracted power to just below 2.5%.

A rotation stage calibration will need to be done since the output power of the PMC has changed, but this will be done as a target of opportunity when the IMC can be unlocked in the coming days.

I did not get the chance to run quarterly tests on the PSL dust monitors, but otherwise, this work closes WP 11716.

H1 AOS
jason.oberling@LIGO.ORG - posted 14:46, Tuesday 27 February 2024 - last comment - 14:04, Friday 01 March 2024(75974)
FARO Progress Update

J. Oberling, R. Crouch

Update on FARO work during the O4 commissioning break.  Previous updates at the following alogs (with associated comments): 75669, 75771.

Since the last progress update we've been testing our FARO X/Y alignment routines and attempting to re-establish Z=0 based on the door flange scribes on BSC2.  We've been navigating Laser Safe/Hazard transitions, as we can only do optical surveying (like using an autolevel for our BSC2 survey) during Laser Safe; the FARO is usable during Laser Hazard so we've been using these windows for FARO work.

FARO X/Y Alignment Testing

As a means of testing the repeatability of the FARO's X/Y alignments we have been using the brass monuments for mechanical test stand #2 (TS2) in the West Bay of the LVEA.  The FARO gives us a global X/Y coordinate for these monuments based on our alignment (which is also a local X/Y coordinate since XG=XL and YG=YL), which we can use to compare the Measured Local LVEA coordinates to each other and test the repeatability of different FARO alignments.  In addition, each test stand has a monument that represents the [0,0] of the test stand (monument TS2-10 for TS2). We can therefore subtract the local LVEA X/Y coordinate for the [0,0] monument from each measured test stand monument to translate from Local LVEA coordinates to Local Test Stand coordinates.  With this translation we can also compare the monument coordinates measured by the FARO to where we think they are via their as-designed coordinates (designed test stand monument coordinates taken from D1100291).

The results are shown in the attached .pdf file 'FARO_XY_Alignment_Test_TS2_Monuments.pdf'; I have also attached the reports generated from PolyWorks for each of our surveys.  To date we have done this with 3 separate alignments:

Alignments 1 and 2 give us insight into how using different feature types (points vs spheres) for our alignment monuments causes variations in the alignment. Alignment 3 was used to give some insight into the repeatability when the same alignment feature types are used with 2 different alignments (in this case 'All Spheres' vs 'All Spheres'). The first 3 pages of the results pdf file detail the measurements of the TS2 monuments, the conversion to Local Test Stand coordinates, and a comparison of the measured test stand coordinates to the as-designed ones; 1 page is used for each alignment. The final page compares the 3 alignments to each other, both in Local LVEA coordinates and in Local Test Stand coordinates. Some thoughts:

We're still digesting this. I'm intrigued by the measured test stand coordinates for the monuments that should be in line with each other. For example, TS2-1 is supposed to be directly in line with TS2-4, only separated along the test stand's Y axis; this is the same for the group TS2-2, TS2-10, and TS2-5, as well as the group TS2-3 and TS2-6. All 3 alignments show these monuments being at an angle with each other, and a similar angle at that; almost like the line from TS2-2 to TS2-5 (which also intersects TS2-10) was not straight when these monuments were laid out, and that carried over in the setting of the monument groups to the sides of this line (TS2-1/TS2-4 and TS2-3/TS2-6). I will say that I find the deviations between FARO-measured and as-designed test stand monument coordinates particularly worrying; whether that's due to an error in the FARO alignment or an actual error made when these monuments were first laid out I can't yet say; some more investigation is required (we could do something like use a 100' survey tape to measure distances between monuments and compare to the FARO measurements). Also, I would like to set up a new Sphere+Points alignment to see if using the point alignment feature improves the repeatability; as I've said a few times in the previous alogs, we suspect that the sphere fit routine and the limitations of the sphere fit rods are introducing error into the FARO alignment, and the above alignment comparisons appear to support that at first glance. I'm interested to see if using points instead of spheres improves this, but we need a new alignment to compare to the old Sphere+Points alignment.

BSC2 Z=0 Water Level Survey

Based on the results of our FARO work detailed in alog 75771, we want to attempt to re-establish Z=0.  This was originally done by averaging the 8 door flange scribes of the BSC2 chamber (1 at 3 o'clock and 1 at 9 o'clock on each of the 4 door flanges).  With all of the line of sight blockers (beam tubes, other chambers, electronics racks, cable runs, etc.) we felt the easiest way to repeat this was to use a water tube level.  To do this we used roughly 60' of flexible tubing with an 8mm OD and 6mm ID.  We filled it with water (setting up a siphon works great for keeping air bubbles out of the tube), leaving some air at each end, and set up around BSC2.  One end of the level was fixed to the unused HEPI pier for BSC8, with a scale attached nearby for measurements; the other end was placed along the door flange scribe under measurement.  We used an autolevel to set the water line on the scribe line to be measured, then used a 2nd autolevel to sight the other end of the tube and take a reading on the scale.  We ended up using several rubber bands and some tape to secure the tube to the door flange; the tape was necessary to keep the tube from sagging under the weight of the water (the BSC scribes are over 6' above the ground), while the rubber bands helped to keep it mostly secure while we were setting it on a scribe line.  The first 3 pictures show the setup, with the third one taken through an autolevel to show a close up of the water in the tube (have to sight at the bottom of the meniscus, just like with a graduated cylinder or similar measurement devices (like glass measuring cups in your kitchen)).

We did have a few issues, chief among them being that we could not get the water in the tube to stop moving at first.  We would set the water line on a door flange scribe and watch it settle, and it would keep dropping slowly over several minutes.  We noticed that regardless of where we set the water level, it would always drop to the same point; what finally clued us in to the issue was noticing that the other end of the water level was also dropping.  If the level were rebalancing we would expect one end to drop while the other rose, but this was not the case.  At this point we also noticed that, even though we left about 12" of air at each end of the tube when we initially filled it, we now had almost 2' of air at each end.  The solution?  Not enough water in the tube, so add some more.  We did this and all the stability problems vanished.  We could then set the level on a scribe line, and after just a few seconds it would settle out and be very stable.  Best explanation I have is we didn't have enough water to account for the slight compression of the water column at both ends of the level, since our measurement point was over 6' off the ground.  With only a 6mm ID on the tubing, it doesn't take much to cause a big difference in how the level behaves.  By adding ~9mL of water to the tube (using a 2mL transfer pipette) all of our problems were solved.  Second issue: don't step on or touch the water level once set; this causes the water in the tube to move, a lot.
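For scale, a back-of-the-envelope check (mine, not from the survey notes): the cross-sectional area of 6mm ID tubing is

    A = pi * (0.3 cm)^2 ~= 0.283 cm^2

so the ~9mL we added corresponds to 9 cm^3 / 0.283 cm^2 ~= 32 cm of water column shared between the two ends, consistent with small volumes moving the level a lot.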

The other big issue we had was sighting the correct scribe line on the door flanges.  Over the years since site construction several additional scribe lines have been added to many of the door flanges, all within several mm of each other.  Most have no markings on them, a few had arrows, but 1 scribe on each flange was marked with 3 punch marks; this was also true for the 3 flanges with only 1 scribe on them.  So we sighted the scribe line marked by the punches on all door flanges.  The 4th picture shows an example of these punch marks (there are 2 scribe lines in this picture, one that is straight and one that is not; we used the one that is straight, which can be seen behind the autolevel cross hairs).

With our scribe lines chosen and other issues figured out, we set about measuring all 8 of the BSC2 door flange scribes.  The final picture is a shot of my notes from the survey.  Notice the large separation for the -X door scribes.  Mike Zucker indicated to us that he thinks the scribes were placed to within +/- 1mm of flange center (having a hard time finding documentation of this, he is currently looking for the old "end item data package" for the chambers from their initial construction in the 90s), so this 11.3mm separation in particular is puzzling (we also measured a 4.6mm separation for the +Y door, 1.3mm separation on the -Y door, and 0.5mm separation on the +X door).  One thing he suggested we can do is use a flat survey tape to check that the scribes are on a true diameter of the flange (are they 1/2 circumference apart?), which we will do once we have Laser Safe again.  Once we confirm we've used the correct scribe lines we will continue using the average of these scribes to check the various height marks around the LVEA.  Should we find that we don't have the correct scribes then we will have to repeat the water level survey.

Images attached to this report
Non-image files attached to this report
Comments related to this report
corey.gray@LIGO.ORG - 11:40, Wednesday 28 February 2024 (76029)EPO

Tagging EPO for FARO pics.

jason.oberling@LIGO.ORG - 14:04, Friday 01 March 2024 (76075)

J. Oberling, R. Crouch, R. Short

Ryan S. and I went out yesterday, 2/29, and used a flat survey tape to measure the distance between the 3-punch scribe marks along the circumference of the 4 BSC2 door flanges; the survey tape has 1.0 mm tick marks, so the best we can measure to is the nearest 0.5 mm.  If these scribes are the correct ones to use then they should be 1/2 circumference from each other, which would mean the differences we measured with the water level are due to the flanges being clocked when the chamber was built.  We had to do some DCC spelunking to find the correct OD for the BSC door flanges.  Ryan C. found D970412, which eventually led to D961102.  This document is the Release for Quote for the BSC door flanges, so not an as-built, but it's the best we've been able to find so far so I'm going with it.  D961102-04.pdf lists the OD of the BSC door flange as 68.50 inches.  Converting that to mm and calculating the 1/2 circumference gives us 2733.0 mm (quick arithmetic check after the list below).  Our measurements from yesterday:

  • +Y door: 2732.0 mm
  • -Y door: 2734.5 mm
  • +X door: 2734.0 mm
  • -X door: 2733.0 mm
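As a quick arithmetic check of the expected value: 68.50 in x 25.4 mm/in = 1739.9 mm OD, and (pi x 1739.9 mm) / 2 ~= 2733.0 mm, matching the half circumference used above.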

So the 3-punch scribes on all door flanges meet the expected 1/2 circumference of 2733.0 mm to within +1.5/-1.0 mm.  We have yet to find any kind of documentation or spec for these scribe lines, so I can't definitively say to what tolerance they were supposed to be placed, but I've been told +/-2.0 mm in the past and our measurements appear to meet that.  To me this says that Ryan C. and I used the correct scribe lines during our water level survey, but the flanges were unexpectedly clocked w.r.t. local horizontal.  This in turn does give us an average across those 8 scribe lines that we can use to start measuring height marks to see if we can identify the source of the Z axis discrepancies the FARO has been reporting.  Ryan C. and I will begin doing this during upcoming Tuesday maintenance windows as both of our schedules allow.

H1 CDS
david.barker@LIGO.ORG - posted 14:14, Tuesday 27 February 2024 (76004)
digital video camera power cycles

Tony, Jonathan, Erik, Dave:

Following the network upgrade last week we found some corner station cameras had issues:

Some were pingable from the 10.106 vlan but not from 10.22 (one even came back briefly on 10.22 then went away again).

Most of the above had no image, but one did.

In all cases I power cycled the camera and restarted the server process. All then provided images except h1cam21 (ITMX).

Cameras power cycled are:

h1cam01 (FC REFL, referred to as FCGS in the switch config)

h1cam02 (MC Refl)

h1cam04 (ALS X)

h1cam05 (ALS Y)

h1cam21 (ITMX) no image

h1cam24 (ITMY Green)

h1cam26 (BS)

h1cam30 (POP Air)

H1 General
anthony.sanchez@LIGO.ORG - posted 13:30, Tuesday 27 February 2024 (76003)
Tuesday Ops Mid-Shift Report

TITLE: 02/27 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 11mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.24 μm/s
QUICK SUMMARY:

Rahul has completed the transfer functions, and Fil has completed the ground loop tests on HAM8.
The HAM8 Door crew has been assembled in the HAM Shaq to start putting the HAM8 door back on.
PSL Ref Cav Alignment is complete and the PSL Team is out of the PSL enclosure.
The SQZ Team no longer needs the LVEA to be Laser hazard.

 

H1 CAL
louis.dartez@LIGO.ORG - posted 12:46, Tuesday 27 February 2024 (76000)
On Calibrating H1:OMC-DCPD_SUM_OUT_DQ to Strain
Vicky mentioned to me that she was having trouble using pyDARM to calibrate H1:OMC-DCPD_SUM_OUT to strain for the noise budget. I still need to take a closer look at how strain is computed in the noise budget, but I have put together some example scripts that should make it easy to do.

To convert the OMC-DCPD_SUM_OUT signal to strain you need to do the following:

1.) convert OMC-DCPD_SUM_OUT to units of DARM_ERROR (or DARM1_IN1) from units of mA
2.) multiply the result by the response function provided by pyDARM (informed by the most recent calibration report)
3.) divide the product by the mean length of the arms 

For discussion on steps 2 and 3: see Eq. (4) in the O3b cal paper.
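As a minimal sketch of those three steps (illustrative only, not the pydarm-utils implementation; the function name and inputs are my own inventions, assumed to be numpy arrays you have already obtained):

    import numpy as np

    L_ARM = 3994.5  # approximate mean H1 arm length [m]

    def dcpd_sum_asd_to_strain(asd_mA, mA_to_counts, response_mag):
        """Calibrate an OMC-DCPD_SUM_OUT ASD into strain.

        asd_mA       : ASD of OMC-DCPD_SUM_OUT [mA/rtHz]
        mA_to_counts : measured DARM1_IN1/DCPD_SUM value (step 1)
        response_mag : |R(f)| from pyDARM, evaluated on the same
                       frequency vector as asd_mA [m/count] (step 2)
        """
        asd_counts = asd_mA * mA_to_counts      # step 1: mA -> DARM counts
        asd_meters = asd_counts * response_mag  # step 2: counts -> meters
        return asd_meters / L_ARM               # step 3: meters -> strain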

Step 1 above is necessary because LHO has linearization logic in between the raw DCPD readout and the output of the LSC Input Matrix. For discussion on this, see G1700316. In particular, I am referring to everything from the input of the block named "Power Normalization" to the output of the block named "LSC Input Matrix" along the red path shown on Slide 18.

To estimate the effect of this signal path we can take a transfer function: H1:LSC-DARM1_IN1_DQ/H1:OMC-DCPD_SUM_OUT_DQ.
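For illustration, the standard Welch-style estimator for this transfer function (and its coherence) is the cross-spectral density of the two channels divided by the power spectral density of the input. This is a generic sketch, not the pydarm-utils code, and assumes the two time series are already in hand as synchronized numpy arrays:

    import numpy as np
    from scipy.signal import csd, welch

    def estimate_tf(dcpd_sum, darm_in1, fs, fftlength=8):
        """Estimate DARM1_IN1/DCPD_SUM_OUT and its coherence."""
        nperseg = int(fftlength * fs)
        f, Pxy = csd(dcpd_sum, darm_in1, fs=fs, nperseg=nperseg)
        _, Pxx = welch(dcpd_sum, fs=fs, nperseg=nperseg)
        _, Pyy = welch(darm_in1, fs=fs, nperseg=nperseg)
        tf = Pxy / Pxx                        # transfer function estimate
        coh = np.abs(Pxy)**2 / (Pxx * Pyy)    # magnitude-squared coherence
        return f, tf, coh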

I've written some Python code to measure H1:LSC-DARM1_IN1_DQ/H1:OMC-DCPD_SUM_OUT_DQ and to calibrate a H1:OMC-DCPD_SUM_OUT timeseries into an asd with units of strain. While I hope to get these tools integrated with pyDARM soon, I've placed them in a utility repo for now: https://git.ligo.org/louis.dartez/pydarm-utils.

In particular, the function to estimate the tf that converts an OMC-DCPD_SUM_OUT signal into units of DARM counts is here: https://git.ligo.org/louis.dartez/pydarm-utils/-/blob/main/pydarm_utils/measure/cal.py?ref_type=heads#L7. And the function to calibrate a OMC-DCPD_SUM_OUT timeseries into an asd in units of strain is here: https://git.ligo.org/louis.dartez/pydarm-utils/-/blob/main/pydarm_utils/util/strain.py?ref_type=heads#L6.

To demonstrate that this works, I've attached an example asd plot (dcpd_sum_asd.png) that shows strain as taken from GDS-CALIB_STRAIN, DELTAL_EXTERNAL/L, and from OMC-DCPD_SUM_OUT overlaid on each other. I've also included plots of the H1:LSC-DARM1_IN1_DQ/H1:OMC-DCPD_SUM_OUT_DQ transfer function (dcpd_sum_darmin1_tf.png) and its coherence (dcpd_sum_darmin1_coh.png) to show that 1.) it's pretty flat and 2.) has good coherence below 100Hz.

The times I used for testing were passed to me by Vicky.
gps start time: 1387130434
gps end time: gps_start + 600

Being able to measure H1:LSC-DARM1_IN1_DQ/H1:OMC-DCPD_SUM_OUT_DQ is pretty important because it can fluctuate with time and is under no expectation to remain constant between lock stretches.

For my tests, I estimated the H1:LSC-DARM1_IN1_DQ/H1:OMC-DCPD_SUM_OUT_DQ value to be: 4.0568e-7. This was calculated at the same time as above.
Images attached to this report
H1 SUS (SUS)
rahul.kumar@LIGO.ORG - posted 11:50, Tuesday 27 February 2024 - last comment - 08:48, Thursday 29 February 2024(75999)
OPLEV charge measurements - ETMX and ETMY

This morning after Fil switched ON the ESD HV on EX (EY was already ON), I took the OPLEV charge measurements on ETMX and ETMY. I will post the results after processing the data.

Both the suspensions were restored after the measurements were complete.

Also, I noticed that on ETMX the amplitude of the L3 calibration lines had been set to zero since Jan 16, 2024 (when the last measurements were taken); I have set it to the nominal value of 0.12 after trending it.

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 08:48, Thursday 29 February 2024 (76042)SUS

I ran the analysis this morning for the OPLEV charge measurements that Rahul took on Tuesday.

For ETMX, the charge is trending down towards zero on all DOF & Quads!

For ETMY, the charge appears to be mostly stable around zero with some small uptrends on LL & UL; only on LL (P&Y) is the charge just above 50 [V].

Images attached to this comment
H1 TCS
camilla.compton@LIGO.ORG - posted 10:59, Tuesday 27 February 2024 - last comment - 15:23, Tuesday 27 February 2024(75997)
TCS CO2 and HWS lasers turned back on

HWS and CO2 lasers turned back on this morning. I swapped the HWS camera power supply cabling back to the "quieter" ungrounded version (75876).

Comments related to this report
camilla.compton@LIGO.ORG - 15:23, Tuesday 27 February 2024 (76007)

CO2Y came back with less power than expected (38.8W when we had 42W before the break). TJ and I increased the temperature set point and we now have 40.6W out of the laser head. Plot attached. We might think about slowly stepping the temperature to see if we can find a better operating point.

This is not worrying: in 65277 we showed we can inject ~16% of the power out of the laser into the vacuum through the annular mask. Expecting that we continue observing with 1.7W, we only need ~10W from the head.

Images attached to this comment
camilla.compton@LIGO.ORG - 12:29, Tuesday 27 February 2024 (75998)

TJ and I restarted HWS code on both ITMX and ITMY with camera frequency 1Hz for X, 10Hz for Y (lowest frequency that doesn't saturate pixels).  

New references taken. Ring heaters are at their nominal settings, 0.44W/segment on ITMX, 0W on ITMY.

We had to restart h1msr1 and try starting the code multiple times to avoid a segmentation fault in ITMY.  We checked the wave fronts and removed a bad pixel from ITMX.  ITMY crashed after running for ~10 minutes; we restarted it, but we should check on it later.

Attached is the HWS plot of the location of CO2Y. 

Images attached to this comment
H1 CDS
filiberto.clara@LIGO.ORG - posted 10:39, Tuesday 27 February 2024 (75995)
ESD High Voltage - EX

The ESD HV power supplies and Low Voltage Chassis were powered on at EX. Needed for charge measurements.

H1 ISC (CAL, CDS, SQZ)
jeffrey.kissel@LIGO.ORG - posted 10:34, Tuesday 27 February 2024 - last comment - 15:51, Friday 01 March 2024(75986)
OMC DCPD GW Signal Path Electronics and Compensation Response Changes a Little Below 25 Hz after OMC Swap
J. Kissel, L. Dartez

%%%%% Executive Summary %%%%%%
Remeasured the OMC DCPD electronics chain, including compensation, post Jan/Feb OMC swap. There's a small, 0.3% drop in magnitude below 25 Hz. The first line of suspicion is the environmental or electrical conditions surrounding the new style of transimpedance amplifier (even though the circuit and enclosure itself hasn't changed), but the investigation has just started.

%%%%% More Info %%%%%%

As y'all know, we swapped out the OMC in Jan / Feb 2024 (see highlights in LHO:75529). 
That means we have brand new gravitational wave DCPDs. However, it's *only* the DCPDs that have changed in the GW path. Remember, as of O4, the PD's transimpedance amplifier (TIA) is now inside a separate podded electronics box that encloses a brand-new style of TIA (see T1900676 and G2200551). This need not -- and hasn't -- changed with the swap, whereas it used to need to be changed because the TIA was built into the DCPDs in pre-O4 OMCs.

So, in principle, we've "just" disconnected the old PDs, and reconnected the new PDs, to the same electronics. As such we don't *expect* the frequency response of the signal paths to change.

However, Keita reports, for the first time in history, that there're no electrical issues with the OMC sensors after the OMC swap in January (LHO:75684). While there have not been issues with the DCPDs themselves, per se, recall, for example, problems in the past including shorts to electrical ground of the OMC's PZTs (IIET:12445). Keita did report, though, that during this Jan/Feb 2024 vent he found and mitigated some grounding issues with the preamp -- the 3 MHz SQZ side-band pickoff of the gravitational wave DCPDs had shown some signs of electrical short to ground. Quoted from LHO:75684:
    Inside the chamber on the in-vac preamp, the DB25 shell is connected to the preamp body (which is isolated from the ISI via PEEK spacer). At first the DB25 shell and the preamp body were shorted to the ISI table, but this turned out to be via the 3MHz cable ultimately connected to the in-air chassis. As soon as both of the 3MHz cables were disconnected from the in-air chassis, the preamp body as well as the DB25 shell weren't conducting to the ISI table any more.
I interpret this to mean that there's a *potential* that the electrical grounding on board the OMC and in the GW signal path of the TIA *has* changed from "there used to be an issue" to "now there is no issue."

So with uber-careful, precision calibration group hat on, I repeated the remote, full-chain measurements of the OMC DCPD GW path -- including the digital compensation for their frequency response -- that I took on July 11 2023 -- see LHO:71225.

Attached are the results -- the magnitude of the transfer function -- for DCPD A and DCPD B. There are three traces:
    - The former measurement with the previous OMC DCPDs, on 2023 Jul 11.
    - The first measurement with the new OMC DCPDs connected, on 2024 Feb 22 (last week Thursday)
    - The second measurement with the new OMC DCPDs connected, on 2024 Feb 26 (yesterday, 4 days later)

We do see some small change ([3.05e6 / 3.04e6 - 1]*100 = 0.3% reduction) in the magnitude below about 25 [Hz].

Preliminary investigations cover a few things that might cause this. Because the "wiggle of change" is happening at 25 [Hz] -- right at the RLC complex poles -- I immediately suspect the environmental sensitivity of the giant ~2.4 [Henry] inductors and/or the electrical grounding situation surrounding the TIA. 

Regarding the environmental situation:
    - The OMC and HAM6 are back mostly at ultra high vacuum (~1e-6 [Torr], when it's typically 1e-7 [Torr]) :: (so, any physical distortions of the enclosure that would change the geometry of the inductor should be similar) 
    - The TIA has been powered on for several days even prior to the 2024 Feb 22 measurement :: (so the dominant thermal load on the circuit -- the bias voltage -- should have had time to thermalize), and 
    - LVEA temperatures are stable, albeit 2 [deg C] cooler :: (I'm not sure if a 2 [deg C] change in the external environment will have such an impact on the PDs)

Of course, it's an odd coincidence that both DCPD chains' responses changed in the same direction and by the same magnitude -- maybe this is a clue.
The fact that the 2024-Feb-22 and 2024-Feb-26 measurements agree indicates that:
    - The change is stable across a few days, implying that
    - The TIA circuit has been on for a while, and the circuit is thermalized

Also attached are trends of these environmental conditions during the 2023-Jul-11 and both 2024-Feb measurements.

Also also attached are the two relevant MEDM screens showing the OMC DCPD A0 filter bank configuration during the DCPD A measurement (OMC DCPD B0 is the same), and the Beckhoff switch states for the excitation relay in the TIA and the whitening gain relay in the whitening chassis.

%%%%% What's next? %%%%%%
    (1) Ask Keita / Koji / Ali some details about the DCPD chain that I've missed having been out.
        (a) Are you sure you plugged in the transmitted PD into DCPD A and the reflected PD into DCPD B, the configuration we'd had with the previous OMC?
        (b) When were the electronics powered on?
        (c) Can you confirm that other than the DCPDs and the cable connecting them to the TIA, no electronics have changed?

    (2) Using the same remote measurement, configure the system to measure the TIA response by itself to see if there's a change and if so if it matches this overall chain change.

    (3) If (2) doesn't work, use the remote measurement tool to measure the TIA and the Whitening together, take the ratio of (3)/(2) to see if the whitening chassis response has somehow changed.

    (4) If the answers to (1), (2), or (3) don't solve the mystery, or provide a path forward, then we ask "does this *matter*?" A reminder -- any change in the frequency dependence of the OMC DCPD GW path electronics that's not compensated is an immediate and direct systematic error in the overall DARM calibration. So the question is: does 0.3% error below 25 Hz matter, or is it beneath the uncertainty on the systematic error in calibration that's present already for other reasons? To answer this question, we'll resurrect code from G2200551, LHO:67018, and LHO:67114, which creates an estimate of the impact on the calibration's *response* function systematic error, i.e. an "eta_R" (sketched after this list). 

    (5) If the resulting estimate of eta_R is big compared with the rest of the systematic error budget, then it matters, and we're left no other course of action than to go out to the HAM6 ISC racks with our trusty SR785, remeasure the analog electronics from scratch, fit the data, and update the compensation filters a la LHO:68167.
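For reference, a sketch of the eta_R bookkeeping mentioned in (4) (my paraphrase of the usual convention, not from this alog): eta_R is the ratio of the response function with the candidate electronics change included to the modeled one,

    eta_R(f) = R_true(f) / R_model(f)

and since a small sensing-chain scale error epsilon enters both the sensing function C and the open-loop gain G = C*D*A, to first order

    eta_R - 1 ~= -epsilon / (1 + G)

so an uncompensated 0.3% change below 25 Hz maps to at most ~0.3% response error there, further suppressed wherever the DARM loop gain is large.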
        
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 15:51, Friday 01 March 2024 (76077)
Here's the debrief I received from Koji and Keita:

(a) Are you sure you plugged in the transmitted PD into DCPD A and the reflected PD into DCPD B, the configuration we'd had with the previous OMC?

Koji says ::
    The now installed OMC is so-called Unit 1.
    - 40m eLOG 18069 covers the PD installation
        . The PD in transmission is B1-22
        . The PD in reflection   is B1-23
    - PD datasheet vendor provided can be found in E1500474

    - Test Results for the OMC and its PDs can be found in E1800372

(b) When were the electronics powered on?

Keita says "The TIA was only briefly powered down and disconnected from its in-air whitening chassis while I was checking for connection to electrical ground. Otherwise it has been powered on."

Given that doors were closed on 2024-Feb-07 (see LHO:75811 and LHO:75810), the TIA would have been powered on for at least 15 days prior to my first measurement on 2024-Feb-22.
So we can rule out that this discrepancy might have been because the electronics had not yet been at thermal equilibrium.

(c) Can you confirm that other than the DCPDs and the cable connecting them to the TIA, no electronics have changed?

According to Appendix D of T1500060 (P180) the former, now de-installed, H1 OMC from Aug 4, 2016 (aka Unit 3) had the onboard cable twisted.
Comparing this with the past LLO unit (aka Unit 1, now installed to LHO), I expect that the role of DCPD A and B are now swapped from the previous OMC (Unit 3).
H1 CDS (SUS)
filiberto.clara@LIGO.ORG - posted 10:32, Tuesday 27 February 2024 (75994)
HAM8 Ground Loop Checks

D1900451
E2100504
T1200131

HAM8 ground loop checks completed. All passed. Some cables previously modified per E2100504. This was to resolve known in-chamber ground issues for FC2 Top (BOSEM).

Description   Cable     Location      Notes
FC2 Top       FC2_001   FCES SUS-C1   Pin 13 and shield lifted per E2100504
FC2 Top       FC2_002   FCES SUS-C1   Pin 13 and shield lifted per E2100504
FC2 Middle    FC2_003   FCES SUS-C1   Tested ok
FC2 Bottom    FC2_004   FCES SUS-C1   Tested ok

LHO VE
david.barker@LIGO.ORG - posted 10:18, Tuesday 27 February 2024 (75992)
Tue CP1 Fill

Tue Feb 27 10:12:02 2024 INFO: Fill completed in 11min 57secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 PSL
austin.jennings@LIGO.ORG - posted 09:46, Tuesday 27 February 2024 (75990)
PSL Weekly FAMIS

Closes 26233, last done in 75700


Laser Status:
    NPRO output power is 1.807W (nominal ~2W)
    AMP1 output power is 67.22W (nominal ~70W)
    AMP2 output power is 141.2W (nominal 135-140W)
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN

PMC:
    It has been locked 0 days, 0 hr 6 minutes
    Reflected power = 16.3W
    Transmitted power = 108.3W
    PowerSum = 124.6W

FSS:
    It has been locked for 0 days 0 hr and 0 min
    TPD[V] = -0.01492V

ISS:
    The diffracted power is around 3.3%
    Last saturation event was 0 days 0 hours and 5 minutes ago


Possible Issues:
    FSS TPD is low
    ISS diffracted power is high

 

All looks nominal except for the PMC and FSS not being locked for a long duration of time, but this is expected with the ongoing in-chamber PSL work.

H1 ISC
betsy.weaver@LIGO.ORG - posted 16:05, Monday 26 February 2024 - last comment - 10:29, Tuesday 27 February 2024(75977)
ISC HAM6 cameras realigned at ports

Today we managed to realign the cameras on HAM6 which show OMC TRANS and AS AIR.   At first we could not see any response from the AS AIR camera, so there was some cable swapping back and forth, reseating of cables, and a few reboots by Dave B/control room.  At some point it started working again.

In any case, beams are now on these cameras.

Comments related to this report
jennifer.wright@LIGO.ORG - 18:14, Monday 26 February 2024 (75979)

Vicky, Jennie, Swadha,

After the camera was aligned we realised the 1mA mode Vicky and I were locking the OMC to on Friday was a 1/0 mode. This explains why we were having problems getting the yaw ASC on the OMC to converge, as the cavity alignment was off in yaw.

We then scanned the PZT2 until we saw TM00 on the camera and could see error signals on the input to the LSC servo, then tweaked the OM1 and OM Sus alignment until the mode height was maximised at 2mA.

We were not sure if ASC was working - even with a nice TM00 mode and all the offsets on the QPDs altered so the outputs of each ASC_QPD_{A,B}_{PIT,YAW}_OUTPUT were 0. We also tried gain flips and we found a stable situation for ASC, but didn't check if it was really locked. The stable situation (ASC not railing) was sign flips on both yaw gains for the OMC-ASC_{POS,ANG}_X_GAIN and OMC-ASC_ANG_Y_GAIN. We reverted the changes we made except for OMC-ASC_{POS,ANG}_X_GAIN.

Attached is a nice lock stretch as an example of how good a mode we could get after Swadha tried to tweak up the alignment of the OMC and OM3 (the nominal for earlier mode scans was a locked level of 3mA on the DCPD_SUM_OUTPUT).

Below are the reference values for calculating the loss

Locked: 1.3659 mW at 1:29:18 UTC for OMC_REFL_A_LF_OUTPUT

Unlocked: 4.25555 mW at 1:40:47 UTC for OMC_REFL_A_LF_OUTPUT

2.16955 mA at 1:30:07 UTC for same lock stretch on DCPD_SUM_OUTPUT

(REFL channels are noted as calibrated in mW on the OMC.adl screen; we think DCPD_SUM is calibrated in mA).
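If the intended figure of merit is simply the fraction of incident power that is not reflected when locked (my assumption; the full loss analysis may apply further corrections for mode mismatch etc.), these numbers give:

    1 - 1.3659 mW / 4.25555 mW ~= 1 - 0.321 = 0.679

i.e. roughly 68% of the light on OMC_REFL_A disappears from the reflected port when the cavity locks.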

Scan starts at 2:05:27 UTC and runs for 200s. Using template Feb26_2024_PSL_OMC_scan_coldOM2.xml in /userapps/sqz/h1/Templates/dtt/OMC_SCANS

Second image is the scan we took.

 

Images attached to this comment
victoriaa.xu@LIGO.ORG - 18:28, Monday 26 February 2024 (75981)

Screenshot of OMC LSC locking and relevant signals attached. The OMC LSC capture range is quite small, so likely last week we were trying to lock LSC while not being in range of the LSC error signal. Scanning OMC PZT2 in step sizes of 0.1, looking for the LSC_I error signal into the OMC LSC servo, then locking the LSC servo loop, worked.
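A hypothetical sketch of that scan in guardian/ezca-style Python (the actual scan was presumably done by hand from the control room; both channel names below are assumptions, and `ezca` is the dict-like EPICS wrapper the guardian environment provides):

    import time
    import numpy as np

    def scan_omc_pzt2(ezca, lo=-5.0, hi=5.0, step=0.1, dwell=1.0):
        # Step the PZT2 offset slowly, recording the demodulated LSC_I
        # error signal; a dispersive zero crossing flags a resonance
        # within the (small) capture range of the LSC servo.
        readings = []
        for offset in np.arange(lo, hi + step/2, step):
            ezca['OMC-PZT2_OFFSET'] = float(offset)   # assumed channel name
            time.sleep(dwell)
            readings.append((offset, ezca['OMC-LSC_I_OUT16']))  # assumed
        return readings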

Images attached to this comment
jennifer.wright@LIGO.ORG - 10:29, Tuesday 27 February 2024 (75993)

Just to clarify, the scan started at 2:05:27 UTC on 2024-02-27, and the template runs two 100s scans up in voltage.

H1 TCS
camilla.compton@LIGO.ORG - posted 10:45, Thursday 22 February 2024 - last comment - 10:13, Tuesday 27 February 2024(75928)
CO2X tripped off at 22:00 UTC yesterday after its temperature was set too low; back on now.

With the CDS and Beckhoff work yesterday, some large numbers got into the CHILLER_SET_POINT filters, which requested a low setpoint for CO2X; the laser faulted and turned off when it got to 15degC at 22:06 UTC yesterday. This morning I turned it back on and set H1:TCS-ITMY_CO2_CHILLER_SERVO_GAIN_GAIN from 1 to 0 (as in alog 75715) to stop any feedback from the laser being off going to the chiller.

TJ and I plan to make sure the H1:TCS-ITMX_CO2_PZT_SERVO_GAIN_SW2R is correctly turned on and off in the Guardian code when the CO2 lasers are taken DOWN to avoid this in future.

Comments related to this report
camilla.compton@LIGO.ORG - 10:13, Tuesday 27 February 2024 (75991)

This is correctly turned off in DOWN, but the Beckhoff work on this day changed the settings (plot) and we did not rerun the DOWN state. TJ has added a detector to the TCS_ITM{X,Y}_CO2 guardians to rerun the DOWN state if H1:TCS-ITMX_CO2_PZT_SERVO_GAIN_OUTPUT goes >1000. It is usually below 100, but it has an integrator that can act strangely on Beckhoff channel changes. Tested and loaded in both X and Y.
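A hedged sketch of what such a guardian detector could look like (not the actual TCS_ITMX_CO2 code; `ezca` and `notify` are supplied by the guardian runtime, and the state name here is made up):

    from guardian import GuardState

    PZT_OUT = 'TCS-ITMX_CO2_PZT_SERVO_GAIN_OUTPUT'

    class LASER_UP(GuardState):  # hypothetical running state
        def run(self):
            # Output is usually <100 counts; >1000 flags a railed
            # integrator, e.g. after a Beckhoff channel glitch.
            if abs(ezca[PZT_OUT]) > 1000:
                notify('CO2 PZT servo output railed; rerunning DOWN')
                return 'DOWN'  # jump back to DOWN to reset the settings
            return True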

Images attached to this comment
H1 TCS (CDS)
camilla.compton@LIGO.ORG - posted 14:14, Friday 16 February 2024 - last comment - 13:02, Tuesday 27 February 2024(75876)
Changed Wiring in HWS Camera External Power Supply - Have proposed coupling of combs!

Fil, Luis, Camilla. FRS 26828#c11. Previous troubleshooting in alogs 74951, 74750.

Today Fil went onto the HWS table and verified the HWS cameras are connected straight to the external power supply (photos in FRS26828#c8) via cables connected together. The HWS camera fiber CLink box is powered by the HWS breakout chassis D1200934 (originally designed to power cameras too).

Fil theorized that the HWS camera grounding is due to the power-supply-to-fiber-CLink connection grounding the cameras! With a multimeter, we verified the HWS camera is grounded to the table when the CLink power cable is connected, but when we disconnect CLink power the camera is not grounded to the table (it has Kapton tape and plastic bolts).

Step 1 (Completed): I removed the grounding cables from the HWS external power supply as Luis suggested in FRS 26828#c11, to avoid us holding the camera's negative terminal at mains ground. Photo of current connections attached. 

Step 2 (Planned): After verifying step 1 doesn't affect the combs (need overnight "quiet" data), Fil will make a new cable to power both the camera and the CLinks via the external supply (14V is fine for both). 

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 09:42, Wednesday 21 February 2024 (75911)

Preet, Camilla

It appears changing the cabling of the external power supply has removed/reduced the comb: compare the blue trace before the cabling change to the red trace after the change in the attached plot.

To verify this, we undid the cabling change, back to the old configuration, this morning at 9:30am and turned the cameras back to 7Hz each; we'll later check if the comb reappears. 

Images attached to this comment
camilla.compton@LIGO.ORG - 10:58, Tuesday 27 February 2024 (75996)CDS, DetChar, PEM

Preet, Camilla, FRS 26828 

With the swap back to the old cabling, we confirmed that the comb came back; see the Feb 22nd data in the attached plot. This morning I changed the cabling back to the "quiet" version, photo attached. 

Question for others: is the old cabling expected to push noise into DARM, or is there an unknown coupling? Tagging PEM, CDS, DetChar. 

Images attached to this comment
luis.sanchez@LIGO.ORG - 13:02, Tuesday 27 February 2024 (76001)
Have you tried using a different power supply, or lowering the voltage to +12V? The camera also works with +12V, and the RCX C-Link only requires a +5V supply.