Accurate, real-time information about the cuttings and mud on the shale shaker can be invaluable.
For example, a distinct change in the shape and size of cuttings can indicate an unexpected change in the sub-surface lithology. Similarly, accurate localization of the mud-front on the shale shaker can help ensure that the shale shakers are operating efficiently. However, the location and operational environment of shakers make classical instrumentation difficult and expensive. Computer vision provides a robust and inexpensive sensing modality that can automatically and continuously track the state of the shaker, mud flow and cuttings.
The video above demonstrates our solids-analysis computer vision system. The automated video processing works by 1) isolating individual particles on the shaker using object detection techniques, 2) tracking the particles over time using a temporal-spatial-feature tracking algorithm, and 3) measuring particle sizes, shapes and velocities using image morphology techniques. The resulting particle features (e.g., size, shape, velocity, eccentricity) are summarized as statistical distributions (the histograms below the image). These histograms can be used to flag changes in the cuttings stream and bring that information immediately to the attention of the mud-logger or driller, significantly increasing drilling safety and efficiency and helping provide a better understanding of the lithology at the bottom of the well-bore.
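To make the particle-measurement step concrete, here is a minimal sketch in the spirit of steps 1 and 3: segment bright particles in a binarized frame, label connected regions, and measure simple shape features per particle. The labeling routine, feature names, and synthetic frame are all illustrative assumptions, not the production pipeline (which would operate on real video and use more sophisticated morphology).

```python
import numpy as np
from collections import deque

def label_particles(binary):
    """Label 4-connected foreground regions in a binary image (toy
    connected-component labeling via breadth-first flood fill)."""
    labels = np.zeros(binary.shape, dtype=int)
    count = 0
    for y, x in zip(*np.nonzero(binary)):
        if labels[y, x]:
            continue
        count += 1
        labels[y, x] = count
        queue = deque([(y, x)])
        while queue:
            cy, cx = queue.popleft()
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    queue.append((ny, nx))
    return labels, count

def particle_features(labels, n):
    """Per-particle area and bounding-box aspect ratio (a crude
    stand-in for size and elongation/eccentricity measures)."""
    feats = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        h = np.ptp(ys) + 1
        w = np.ptp(xs) + 1
        feats.append({"area": len(ys), "aspect": max(h, w) / min(h, w)})
    return feats

# Synthetic frame: two bright "cuttings" on a dark background.
frame = np.zeros((20, 20))
frame[2:5, 2:5] = 1.0      # 3x3 compact particle
frame[10:12, 8:16] = 1.0   # 2x8 elongated particle
labels, n = label_particles(frame > 0.5)
feats = particle_features(labels, n)
```

Aggregating these per-particle features over many frames yields the kind of histograms shown below the image, from which distribution shifts can be flagged.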
Mud Front Tracking
One of the largest day-rate expenses on a rig is the cost of drilling mud, and mud is easily wasted if it is allowed to flow off the end of the shale shakers. Properly setting the flow rate and choosing the screen size is complicated because the ideal parameters change with mud and well conditions. Measuring the location of the fluid front would enable automated shaker monitoring, but developing classical fluid-front monitoring instrumentation has been problematic. Since the fluid front can be identified visually, computer vision techniques can be applied to automate this measurement. The video above shows how the fluid front can be automatically identified in footage of the shale shakers. A camera measures object positions in units of pixels, which must be converted to real-world units (e.g., inches) to be useful for automated control. The video also demonstrates how knowledge of the camera's parameters and location can be used to infer real-world coordinates (the fluid-front location) from the raw camera data.
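One common way to perform this pixel-to-real-world conversion for a planar surface like the shaker deck is a homography estimated from a few known reference points. The sketch below estimates the homography with the direct linear transform (DLT); the pixel corners and deck dimensions are hypothetical calibration values, not measurements from the demonstration video.

```python
import numpy as np

def homography_from_points(pix, world):
    """Estimate the 3x3 planar homography mapping pixel -> world
    coordinates from four (or more) correspondences via the DLT."""
    A = []
    for (u, v), (x, y) in zip(pix, world):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    # The homography (up to scale) is the null vector of A.
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def pixel_to_world(H, u, v):
    """Map one pixel coordinate through the homography."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Hypothetical calibration: pixel corners of the shaker deck in the
# image, and their known positions (inches) on the deck plane.
pix = [(100, 400), (540, 400), (600, 80), (40, 80)]
world = [(0, 0), (48, 0), (48, 96), (0, 96)]
H = homography_from_points(pix, world)

# Any detected fluid-front pixel can now be reported in inches:
x_in, y_in = pixel_to_world(H, 320, 240)
```

In practice the same idea is usually carried out with calibrated camera intrinsics and routines such as OpenCV's `findHomography`, but the geometry is as above.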