Development of a Data Reduction Algorithm for Optical Wide Field Patrol (OWL) II: Improving Measurement of Lengths of Detected Streaks

  • Received : 2016.08.04
  • Accepted : 2016.08.24
  • Published : 2016.09.15

Abstract

As described in the previous paper (Park et al. 2013), the detector subsystem of optical wide-field patrol (OWL) provides many observational data points of a single artificial satellite or space debris in the form of small streaks, using a chopper system and a time tagger. The position and the corresponding time data are matched assuming that the length of a streak on the CCD frame is proportional to the time duration of the exposure during which the chopper blades do not obscure the CCD window. In the previous study, however, the length was measured using the diagonal of the rectangle of the image area containing the streak; the results were quite ambiguous and inaccurate, allowing possible mismatches between position and time data. Furthermore, because only one (position, time) data point is created from one streak, the efficiency of the observation decreases. To define the length of a streak correctly, it is important to locate its endpoints. In this paper, a method using a differential convolution mask pattern is tested. This method can be used to obtain the positions where the pixel values change sharply. These endpoints can be regarded as directly detected positional data, and the number of data points is doubled as a result.

1. INTRODUCTION

Optical wide-field patrol (OWL) has a detector subsystem consisting of a CCD camera, de-rotator, filter wheel, chopper, and time tagger (Figs. 1 and 2). This mechanism makes it possible to produce multiple data points from observations of artificial satellites or space debris, which move faster than the sidereal rate of the background celestial objects. In the previous paper (Park et al. 2013), the development of a data reduction algorithm for this type of data was presented in detail. In this paper, the algorithm is reinvestigated and improved.

Fig. 1. OWL detector subsystem. The time tagger is not shown in this picture.

Fig. 2. Design of the detector subsystem. The chopper rotates to separate the trail of a moving object into many streaks. When a chopper blade crosses the photodiode, the open/close status of the CCD window located at the opposite side of the sensor is detected and recorded as time log data.

In Section 2, the previous reduction algorithm is summarized and a problem in the calculation of the streak length is raised. In Section 3, several similar studies are introduced. In Section 4, a method based on using the differential convolution mask is presented and Section 5 summarizes the paper.

 

2. OWL DATA REDUCTION

2.1 Reduction Procedure

Fig. 3 shows a sample OWL observation image. In brief, the reduction procedure is as follows: streaks are detected and their lengths determined from SExtractor output parameters (Section 2.2), and the resulting positional data are then combined with the time log data recorded by the time tagger (Section 2.3).

Fig. 3. OWL test observation image, previously introduced in Park et al. (2013).

2.2 Streak Detection and Length Determination Using SExtractor

The photometry output of SExtractor includes various parameters such as position, magnitude, ellipticity, star/galaxy classification, and the coordinates of the rectangular region containing each detected object. Streak collection is performed using the ellipticity and star/galaxy classification parameters. The collected streaks are fitted with a Legendre polynomial, usually reducing to a straight line. Because the length of a streak is clearly proportional to the exposure time segment between obscurations by the rotating chopper blades, defining the length of the streak is essential. In Park et al. (2013), this length is determined from the coordinate values of the rectangular region containing the streak (Fig. 4), i.e., as the diagonal of that region.

Fig. 4. Streak length determination using the shape parameters of a streak from SExtractor.
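The diagonal-based length of Park et al. (2013) can be sketched directly from the SExtractor bounding-box columns (XMIN_IMAGE, YMIN_IMAGE, XMAX_IMAGE, YMAX_IMAGE are standard SExtractor output parameters; the example values below are illustrative):

```python
import math

def diagonal_length(xmin, ymin, xmax, ymax):
    """Streak length taken as the diagonal of the SExtractor bounding box
    (XMIN_IMAGE, YMIN_IMAGE, XMAX_IMAGE, YMAX_IMAGE), as in Park et al. (2013)."""
    return math.hypot(xmax - xmin, ymax - ymin)

# example: a bounding box 30 px wide and 4 px high
length = diagonal_length(100.0, 50.0, 130.0, 54.0)
```

As Section 2.4 discusses, this diagonal grows with the streak's width as well as its length, which is exactly the ambiguity the revised algorithm removes.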

2.3 Combining Positional Data with Time Log Data

The chopper starts rotating from a steady (stopped) state and its angular speed increases, so the streaks are long at first and become shorter with time. The ratio of the streak length (L) to the corresponding time duration segment (D) recorded by the time tagger is assumed to remain constant.

In other words, the ratio Lᵢ/Dᵢ is expected to be the same for every correctly matched pair i. By adjusting the offset between the sequences of L and D, it is possible to find the best offset, namely the one yielding the minimal scatter of these ratios. The streak positions and the time log data are then matched accordingly.
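The offset search described above can be sketched as follows (a minimal illustration, assuming both sequences are already ordered in time; the relative-scatter measure stands in for whatever residual the original algorithm minimizes):

```python
import statistics

def best_offset(lengths, durations):
    """Slide the streak-length sequence L along the time-duration sequence D
    and return the offset that minimizes the scatter of the L/D ratios."""
    n = len(lengths)
    best_k, best_scatter = None, float("inf")
    for k in range(len(durations) - n + 1):
        ratios = [l / d for l, d in zip(lengths, durations[k:k + n])]
        scatter = statistics.pstdev(ratios) / statistics.mean(ratios)
        if scatter < best_scatter:
            best_k, best_scatter = k, scatter
    return best_k

# durations from the time tagger; the measured streaks start 2 records in
D = [10.0, 8.0, 6.0, 5.0, 4.0, 3.0]
L = [3.0 * d for d in D[2:5]]   # perfectly proportional slice of D
k = best_offset(L, D)           # -> 2
```

At the correct offset the ratios are identical and the scatter drops to zero, which is why the decelerating streak lengths can be matched unambiguously to the time log.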

2.4 Ambiguity of Length Determination

This reduction procedure requires accurate streak length determination, but the length computed from the diagonal of the rectangular region containing the streak is not a direct measurement. The pixel brightness begins to rise from the background level when the chopper blade starts to open the CCD window, reaches half of its peak value when the window is half open (just as the blade edge passes the time-tagger photodiode sensor on the opposite side of the CCD window), and reaches its peak when the window is completely open. To measure the exact length of a streak, therefore, its endpoints must be located accurately.

 

3. RELATED STUDIES

In the field of general image processing, beyond observational astronomy, there are various feature detection methods based on mathematical operations. Some of these have also been adopted for astronomical image analysis, including the characterization of non-star-like, elongated objects such as streaks.

3.1 Using “Tepui” Function as PSF

In Abad et al. (2004), an analytic function referred to as the "Tepui" function (Eq. (4)) is used as the model profile of observed visual binary stars. Fig. 5 is a sample streak and Fig. 6 shows its "Tepui" fitting result. Abad et al. (2004) noted that the parameter "c" in Eq. (4) is related to the length of the profile.

Fig. 5. A sample streak image for the "Tepui" function fitting test.

Fig. 6. Result of the "Tepui" function fitting test using the sample streak. The connected red points are the brightest central pixels of the sample streak image along the X-direction. The fitting is performed using the Levenberg-Marquardt algorithm (Press et al. 2005).

Montojo et al. (2011) also suggested that streaks of artificial satellites fit this "Tepui" function well along the X-direction, while the profile along the Y-direction can be fitted with a Lorentzian function (Eq. (5)).

However, as noted by Abad et al. (2004), additional fitting parameters must be introduced because real observed streaks have rotation, X and Y shifts, and background gradients. When the Lorentzian profile along the Y-direction is included as well, the number of parameters reaches 16 (plus the Lorentzian parameters), which makes this kind of nonlinear fitting very difficult.

3.2 Harris Corner Detector

Harris & Stephens (1988) suggested a combined corner and edge detection algorithm based on the image gradient. The underlying idea of this algorithm is that when a box-shaped search window sweeps over an image, the sum of the pixel values inside the box does not change in a "flat" region in any direction, nor in an "edge" region along the edge direction, but changes significantly in all directions in a "corner" region. This idea can be implemented and parameterized easily using the image pixel gradient. The Gaussian-weighted (using ω(i,j)) sum of squared differences (SSD) within the inspection window centered on position (x, y), for a small shift (u, v), is given by:

E(u, v) = Σ(i,j) ω(i, j) [I(i + u, j + v) − I(i, j)]²

It can be expressed using the pixel gradients (Ix, Iy) and a first-order Taylor series:

I(i + u, j + v) ≈ I(i, j) + u·Ix(i, j) + v·Iy(i, j)

Hence, the squared sum is:

E(u, v) ≈ (u, v) A (u, v)ᵀ

where

A = Σ(i,j) ω(i, j) [ Ix²  IxIy ; IxIy  Iy² ]

Finally, the response (R) is:

R = det(A) − k (tr A)² = αβ − k (α + β)²

where α and β are the eigenvalues of the matrix A and k is an adjustable constant (typically 0.04-0.06). Flewelling & Sease (2014) used this response to analyze observation images of resident space objects (RSOs) such as geostationary orbit (GEO) satellites, identifying and characterizing the features of background star streaks; in this analysis, the background star streaks are reduced to corresponding pairs of endpoints. However, application to an OWL observation image (Fig. 7) revealed the problem of determining an appropriate threshold on the resultant response map.

Fig. 7. Result of applying the Harris corner detector. The input image (left). The calculated response map (middle). An appropriate threshold applied manually to the map (right).
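A minimal sketch of the response-map computation, using uniform window weights as a stand-in for the Gaussian weights ω(i,j) and a toy streak-like image (the 0.5·max threshold is illustrative, which is precisely the thresholding difficulty noted above):

```python
import numpy as np

def harris_response(img, k=0.05, win=2):
    """Harris response R = det(A) - k (tr A)^2 over a uniform window
    (a stand-in for the Gaussian weights of the original formulation)."""
    Iy, Ix = np.gradient(img.astype(float))   # pixel gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    H, W = img.shape
    R = np.zeros((H, W))
    for y in range(win, H - win):
        for x in range(win, W - win):
            # sum the gradient products over the inspection window
            a = Ixx[y - win:y + win + 1, x - win:x + win + 1].sum()
            c = Iyy[y - win:y + win + 1, x - win:x + win + 1].sum()
            b = Ixy[y - win:y + win + 1, x - win:x + win + 1].sum()
            R[y, x] = (a * c - b * b) - k * (a + c) ** 2
    return R

# toy streak: a bright horizontal bar on a dark background
img = np.zeros((20, 40))
img[9:12, 5:35] = 100.0

R = harris_response(img)
ys, xs = np.where(R > 0.5 * R.max())   # strongest responses: the bar's two ends
```

Along the long edges of the bar one eigenvalue of A is near zero, so R is negative there; only the corner-like bar ends produce strongly positive responses, which is why the detector transforms streaks into endpoint pairs.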

3.3 Phase Congruency

While Harris & Stephens (1988) detect features using the gradient on the image plane, Kovesi (1999) presented another method using "phase congruency". In this approach, the image pattern is regarded as a linear sum of multiple sinusoidal signals whose phases ("ϕ" in Eq. (11)) coincide at the positions of edges and corners; using this property, the corners and edges of objects in a general image can be identified (Fig. 8). Flewelling & Sease (2014) applied this method to the characterization of background streaks. However, calculating the phase congruency of Eq. (11) is not easy.

Fig. 8. An example of applying the phase congruency method. The input image (left). The extracted edges and corners of the input image (right).

To calculate this for even a single image, several fast Fourier transforms (FFTs) and inverse FFTs are needed, resulting in excessive calculation time. Additionally, the calculation requires a specific study of the noise behavior of OWL observation images along with parameter optimization, which can vary from image to image. Fig. 9 is a 3-dimensional plot of the phase congruency result for the image of Fig. 7. Because the noise model and the parameter optimization are not complete, the endpoints of the streaks are not clearly defined.

Fig. 9. 3-dimensional plot of the result of applying phase congruency. The input image is the same as in Fig. 7.

 

4. REVISED REDUCTION ALGORITHM

4.1 “Differential” Convolution Mask

Prewitt (1970) presented a method of edge detection using a 3×3 convolution mask (Fig. 10(a)). Convolving an input image with this mask pattern as the kernel yields the horizontal image gradient, because the mask computes the difference between the two sides of the central pixel. We hereafter refer to this mask pattern as the "differential" mask. Prewitt (1970) originally suggested applying the mask in the horizontal and vertical directions and combining the two results as the root of the sum of their squares to extract feature patterns of objects in an image. Our purpose, however, is slightly different: since the position angle of the aligned streaks can be measured, the mask can be rotated to match them (the method is described in Section 4.2). An example of the result is illustrated in Fig. 11. The input image (Fig. 11(a)) has streaks tilted 12.4° clockwise, so the convolution mask (Figs. 10(a) and 10(b)) is rotated accordingly (Figs. 10(c) and 10(d)). After convolving with this rotated, bicubic-interpolated kernel, the rising and declining positions of the streaks appear (Fig. 11(b)). Taking the absolute values of the convolved image reveals the endpoints of the streaks (Fig. 11(c)), where the absolute values of the image gradients are largest, i.e., where the edges of the chopper blades obscure half of the CCD window, as explained in Section 2.4.

Fig. 10. The Prewitt mask (a), its visualization (b), the mask rotated 12.4° clockwise and bicubic-interpolated (c), and its visualization (d).

Fig. 11. The sample image of streaks (a), the image convolved with the rotated kernel (b), and the absolute values of the convolved image (c).
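The zero-rotation case of this "differential" convolution can be sketched as follows (a minimal illustration with a hand-rolled 'valid' convolution; for tilted streaks the kernel would additionally be rotated and bicubic-interpolated as the paper describes):

```python
import numpy as np

# 3x3 Prewitt "differential" mask: responds to brightness changes along x
prewitt = np.array([[-1, 0, 1],
                    [-1, 0, 1],
                    [-1, 0, 1]], dtype=float)

def convolve2d(img, kernel):
    """Minimal 'valid' 2-D convolution (kernel flipped, i.e. true convolution)."""
    k = kernel[::-1, ::-1]
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (img[y:y + kh, x:x + kw] * k).sum()
    return out

# synthetic horizontal streak with a flat top from x=10 to x=29 (length 20)
img = np.zeros((9, 41))
img[3:6, 10:30] = 100.0

grad = convolve2d(img, prewitt)   # signed gradient map
row = grad[3]                     # output row tracking the streak centre
left = int(np.argmin(row))        # steepest rise  -> left endpoint
right = int(np.argmax(row))       # steepest fall  -> right endpoint
length = right - left             # -> 20, the true streak length
```

The two extrema of the signed gradient (equivalently, the two peaks of its absolute value) land on the streak endpoints, so the distance between them recovers the streak length directly.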

Fig. 12. The artificial streak images. Four sets of 16 streaks each are created, with rotation angles of 0°, 6°, 12°, 18°, 24°, 30°, 36°, 42°, 48°, 54°, 60°, 66°, 72°, 78°, 84°, and 90° clockwise from the horizontal axis and Υ values of 1.0 (a) (top left set), 2.0 (b) (top right), 3.0 (c) (bottom left), and 4.0 (d) (bottom right).

4.2 Measuring the Lengths of Streaks and Making the Final Reduction Result

The procedure for measuring the lengths of the streaks is as follows: the position angle of the aligned streaks is measured, the "differential" mask is rotated and interpolated to that angle, the image is convolved with the rotated kernel, and the endpoints of each streak are located at the peaks of the absolute values of the convolved image; the length is then the distance between the two endpoints.

Because the newly identified endpoints are directly observed points to which the raw time log records correspond, the number of resultant data points is twice that of Park et al. (2013), which considered only the center positions of the streaks.
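Pairing each streak's two endpoints with the chopper open/close timestamps then doubles the data yield; a sketch with hypothetical record structures (the field layout below is illustrative, not the OWL data format):

```python
# hypothetical records for illustration:
# one (start_xy, end_xy) tuple per streak, one (open, close) time per exposure
streak_endpoints = [((10.2, 5.1), (30.4, 9.8)),
                    ((35.0, 10.9), (52.1, 14.7))]
time_log = [(0.000, 0.040), (0.065, 0.095)]   # seconds

def to_data_points(endpoints, times):
    """Pair each streak's two endpoints with the chopper open/close times,
    yielding two (position, time) data points per streak instead of one."""
    points = []
    for (start, end), (t_open, t_close) in zip(endpoints, times):
        points.append((start, t_open))   # window starts to open
        points.append((end, t_close))    # window closes again
    return points

data = to_data_points(streak_endpoints, time_log)   # twice as many points
```

This mirrors the statement above: each streak now contributes two directly observed (position, time) pairs rather than a single center point.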

4.3 Comparison with the Old Method via Simulation

To compare the behaviors of the old and new methods of streak length measurement, a brief simulation was performed using artificially created streak images. The streak model is a "Tepui" function along the major axis and a Lorentzian profile along the minor axis, as explained in Section 3.1, similar to the model of Montojo et al. (2011). The input parameters of the simulation are summarized in Table 1, and Fig. 12 shows images created using these parameters. The lengths of these streaks are calculated using both the diagonal method of Park et al. (2013) and the convolution method; the results are shown in Fig. 13 for each Υ value. The streak length values from the diagonal method change with the Υ value because, as the widths of the streaks change, the shapes and sizes of the rectangular boxes containing the streaks change, and hence the lengths of the diagonals change. The positions of the endpoints obtained via convolution, however, are not affected.

Table 1. Simulation parameters

Fig. 13. Results of the length measurements. Dashed lines are values from the diagonal method of Park et al. (2013) and solid lines are those from the convolution method. With varying Υ values of 1.0 (a) (top left), 2.0 (b) (top right), 3.0 (c) (bottom left), and 4.0 (d) (bottom right), the results from the convolution method remain at a length of 16, which is two times the "c" parameter in Eq. (4), while the results from the diagonal method change.
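The qualitative behavior in Fig. 13 can be reproduced with a small sketch. Since Eq. (4) is not reproduced here, a flat-topped super-Gaussian stands in for the "Tepui" profile along the major axis, with a Lorentzian of width w across it; the 10 % detection threshold for the bounding box is an assumption:

```python
import numpy as np

x = np.arange(81, dtype=float)          # along the streak
y = np.arange(-20, 21, dtype=float)     # across the streak

def make_streak(w, c=8.0, x0=40.0):
    fx = np.exp(-((x - x0) / c) ** 8)   # flat-topped stand-in profile
    fy = 1.0 / (1.0 + (y / w) ** 2)     # Lorentzian cross-section, width w
    return np.outer(fy, fx)

def diagonal_method(img, frac=0.1):
    """Length as the diagonal of the bounding box of above-threshold pixels."""
    ys, xs = np.where(img >= frac * img.max())
    return np.hypot(xs.max() - xs.min(), ys.max() - ys.min())

def gradient_method(img):
    """Length as the distance between the gradient extrema of the central row."""
    g = np.gradient(img[img.shape[0] // 2])
    return int(np.argmin(g)) - int(np.argmax(g))

widths = [1.0, 2.0, 4.0]
diags = [diagonal_method(make_streak(w)) for w in widths]
grads = [gradient_method(make_streak(w)) for w in widths]
# diags grows as the streak widens; grads stays constant
```

As in Fig. 13, widening the cross-section inflates the bounding box and hence the diagonal, while the gradient extrema that mark the endpoints stay put.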

 

5. SUMMARY

OWL is an automated observation and data acquisition system for fast-moving objects near the Earth, such as artificial satellites and space debris. It has a chopper system and a time tagger that can generate a large number of data points for a moving target in a single observation. Park et al. (2013) reported a basic data reduction algorithm based on the proportionality between the lengths of the streaks, into which the chopper cuts the trail of a moving target, and the time durations of the corresponding exposures. Although an accurate method of measuring the lengths of the streaks is essential for this algorithm, the method used in the previous study was not accurate enough. In this paper, several related studies on feature detection in the field of image processing were examined and applied to OWL observation images. The results demonstrate that convolution with the "differential" mask pattern presented by Prewitt (1970) is a good solution: it enables more exact streak length measurements, and the number of data points is doubled, significantly enhancing the efficiency of the observation.

References

  1. Abad C, Docobo JA, Lanchares V, Lahull JF, Abelleira, et al., Reduction of CCD observation of visual binaries using the "Tepui" function as PSF, Astron. Astrophys. 416, 811-814 (2004). http://dx.doi.org/10.1051/0004-6361:20031715
  2. Bertin E, SExtractor v2.5 User's manual, Institut d'Astrophysique and Observatoire de Paris (2006).
  3. Bertin E, Arnouts S, SExtractor: software for source extraction, Astron. Astrophys. Suppl. Ser. 117, 393-404 (1996). http://dx.doi.org/10.1051/aas:1996164
  4. Calabretta MR, Greisen EW, Representations of celestial coordinates in FITS, Astron. Astrophys. 395, 1077-1122 (2002). http://dx.doi.org/10.1051/0004-6361:20021327
  5. Flewelling B, Sease B, Computer vision techniques applied to space object detect, track, ID, and characterize, Proceedings of the Advanced Maui Optical Space Surveillance Technologies Conference, Maui, HI, 9-12 Sep 2014.
  6. Greisen EW, Calabretta MR, Representations of world coordinates in FITS, Astron. Astrophys. 395, 1061-1075 (2002). http://dx.doi.org/10.1051/0004-6361:20021326
  7. Harris C, Stephens M, A combined corner and edge detector, Proceedings of 4th Alvey Vision Conference, 147-151 (1988). http://dx.doi.org/10.5244/c.2.23
  8. Kovesi P, Image feature detection from phase congruency, Videre: J. Comput. Vis. Res. 1, 1-26 (1999).
  9. Montojo FJ, López Moratalla T, Abad C, Astrometric positioning and orbit determination of geostationary satellites, Adv. Space Res. 47, 1043-1053 (2011). http://dx.doi.org/10.1016/j.asr.2010.11.025
  10. Park SY, Keum KH, Lee SH, Jin H, Park YS, et al., Development of a data reduction algorithm for Optical Wide Field Patrol, J. Astron. Space Sci. 30, 193-206 (2013). http://dx.doi.org/10.5140/JASS.2013.30.3.193
  11. Press WH, Teukolsky SA, Vetterling WT, Flannery BP, Numerical recipes in C: the art of scientific computing, second edition (Cambridge University Press, Cambridge, 2005), 408-412.
  12. Prewitt JMS, Object enhancement and extraction, picture processing and psychopictorics, eds. Lipkin BS, Rosenfeld A (Academic Press, New York, 1970), 75-149.
