- By Roger Larson
- April 29, 2021
- Seeq Corporation
Solar electrical generating facilities are putting responsive analytics to use so they can improve efficiency and optimize maintenance.
Installations of utility-scale solar electrical generation facilities continue at a strong pace worldwide, creating challenges for companies managing multiple sites distributed over a wide geographic area. There are usually many similarities among the sites, but there are also differences in the underlying technologies and local performance characteristics.
A common need for any solar operator is to understand and optimize the performance of their systems. Many end users resort to a combination of manual activities to obtain data, process it in spreadsheets, and create rudimentary reports. However, these same users recognize the inefficiencies of this approach and are seeking a better solution.
This case study describes how a solar operator teamed with a system integrator to implement Seeq’s advanced analytics software and deploy it throughout the organization. Choosing the right software product was crucial to ensure connectivity with a diverse range of sites and systems, suitability for user-friendly analysis of the available process data, and economical availability to all who would benefit.
Assembling the team
DEPCOM Power, based in Scottsdale AZ, is committed to providing development, engineering, construction, operations, and maintenance services for utility-scale photovoltaic (PV) solar power plants (Figure 1). With deep technical knowledge and experience, the DEPCOM team is always searching for the best available solutions.
Vertech, a systems integrator (SI) based in Phoenix AZ, had already worked with DEPCOM on previous projects, using their automation and information technologies expertise to provide the required services.
Seeq is a software company headquartered in Seattle WA, with solutions for delivering advanced analytics based on any type of process data. Seeq helps companies derive valuable insights from the increasing amount of available raw data, and use those insights to improve business metrics.
Together, DEPCOM and Vertech created an approach for implementing Seeq so they could transition from traditional and manual performance analysis methods to a scalable state-of-the-art solution, with a goal of realizing significant operating savings.
A wealth of data
Each solar site is the source of numerous data points. These input and output data points are monitored and controlled by a variety of digital monitoring and automation systems to operate the equipment making up the solar PV generation process.
Programmable logic controllers (PLCs) are a type of general automation platform controlling many types of generation and balance-of-plant utility equipment. Most sites also have a supervisory control and data acquisition (SCADA) platform overseeing all the subsystems.
There are also more industry-specific smart systems, such as weather monitoring stations, soiling stations, and power controllers and inverters.
Each PV solar power plant operates hundreds of PV panels. The weather conditions and sun location are monitored so the panels can be positioned for maximum efficiency. Power controllers and inverters convert solar energy into electrical power. Soiling stations help operators understand when panels should be cleaned. All of these and other components are interconnected with controllers and a SCADA system for performing direct operation, but there is a wealth of data available which can be used to optimize operations.
Analyzing system performance in the context of weather and tracking accuracy, or the component performance of power controllers and inverters, is relatively simple. More advanced analysis could consider how dirty (soiled) the PVs are and when they should be cleaned to balance operating efficiency against maintenance costs. When data from all sources is available in a consolidated analytical system, it becomes possible to uncover otherwise unavailable insights about how various sources are interrelated, and how overall operations could be improved.
For many years, the DEPCOM performance engineering team had relied on manually keying or importing data based on availability and perceived importance. The SCADA system could supply “live” process data, and associated historian software was capable of supplying logged time-series data. Obtaining the data evolved into a time-consuming chore, delaying the analysis task and introducing an opportunity for errors.
Because nearly every available point was already integrated into some form of digital system, the team knew it would be more efficient to tap into the networks and communications connections directly and automatically. The challenge was finding the right advanced analytics software.
From solar cells to spreadsheet cells
The DEPCOM performance engineering team had already invested many years developing their system analysis methods in accordance with industry best practices. These consisted primarily of creating, editing, and viewing PC-based spreadsheets. Once the necessary live and historical data was added to the spreadsheets, the analysis could begin.
Using spreadsheets for analyzing data is a common—and burdensome—industry practice. At first glance, spreadsheets seem like the right tool for organizing data, and there are some available analytical functions. However, spreadsheets become enormous, resulting in long data loading times, sluggish responses, and increased possibility of crashes.
For all their capabilities, spreadsheets are not easy for everyone to work with, particularly when shared among group members. One wrong click can disrupt an entire calculation. Also, a spreadsheet-based system requires the users to be experienced in obtaining the data, using spreadsheets, understanding the process to be analyzed, and knowledgeable about how to apply various mathematical and analytical methods.
This is a tall order, and the DEPCOM performance engineering team wanted an efficient, best-in-class technology for their customers.
Choosing an advanced analytics platform
With their extensive experience around the solar PV equipment, available data, and existing analytical methods, the DEPCOM performance engineering team already had a significant advantage. By connecting with Vertech, they were able to incorporate an SI's understanding of the existing digital systems and platforms.
Together, the team identified the features required for an advanced analytics software platform:
Ability to connect to varied standalone or SCADA-based historians
Works well with time-series process signal historical data
Cost-effective for purchase and end-user licenses
Easy to install, run, and maintain
Scalable for large enterprise applications
User-friendly to configure and run analyses
Flexibility to apply different analyses on varying assets
Easy to create and share reports and other analyses
Seeq software was identified as meeting or exceeding all the above criteria. It can operate with just about any SCADA or historian found in commercial, industrial, or utility applications. In particular, Seeq specializes in working with any type of data, whether sourced on-premises or in the cloud. It connects directly to the sources without requiring extraction, transformation, loading, or replication—preserving the single source of data truth.
The Seeq deployment and licensing model provides many options, and installs often take less than an hour. The core Seeq Server product can be run on a desktop PC or server. For additional scalability, reliability, and capacity, it can be deployed on dedicated servers, server clusters, or virtual machines, either on-premises or in the cloud.
User applications include Seeq Workbench and Seeq Organizer, both browser-based applications. This makes it easy for users to access the system, and straightforward to administrate and manage user logon and licensing.
Seeq is designed with user-friendliness in mind, so end user experts can quickly apply their knowledge and not be constrained by connectivity and deployment details. The environment supports all phases of analytics, and lets users quickly obtain actionable insights.
SCADA and historian
DEPCOM already had performed several projects using Inductive Automation’s Ignition SCADA product, and even had some concurrent projects working with Vertech using Ignition.
Some local sites were using SQL Server as a historian, but the historian software was already under review because of the important role it would play in supporting the analytical software. Moving forward, the team selected the Canary Labs historian as the best product to meet their needs. It is optimized as a true time-series database for industrial automation, as opposed to traditional SQL databases, which are relational.
Canary offers built-in connectivity to Ignition, and sufficient performance when obtaining data from SQL Server. For industrial time-series historians, performance is not simply a question of how fast points are logged, because each point must include a value as well as a timestamp and quality indication. Depending on the number of points and the logging frequency, it is important for the historian to compress storage heavily while using algorithms that avoid unacceptable data loss. Data must be organized in asset models for usability, and be rapidly accessible as needed. DEPCOM needed a historian that was simple to administer while providing scalable, virtually limitless storage with complete security and data integrity, and Canary Labs met those requirements.
For the first sites of interest, DEPCOM and Vertech decided on a “hub” and “spoke” architecture (Figure 2). The “hub” represented the single centralized enterprise installation, while each “spoke” was one of many local installations.
Each local site installation consisted of one Ignition server system gathering all data, typically using the base Ignition historian with a local SQL Server instance. The project began this way, with the intention of converting each SQL Server instance to Canary.
The centralized enterprise installation included a new Canary enterprise server gathering historical data from all sites, complemented with a Seeq Server as the analytical core. Future plans are to add an enterprise-level Ignition server to facilitate better company-wide visualization.
At each local site, a Canary Labs Chirp Ignition 3rd-party module with splitter capability connects each Ignition tag to the local historian, and also remotely to the enterprise historian.
Seeq Server is hosted at the enterprise level on a single server specified by Vertech and procured by DEPCOM. Vertech was able to perform the installation in minutes. The only initial configuration required for Seeq Server was adding a network data connection to the Canary enterprise server using OPC-HDA. For this installation, Seeq does not connect to any local historian, only to the centralized enterprise server.
Seeq works by indexing the available raw data and making it available in the analysis environment using asset trees.
Defining the asset tree model
A unified asset tree model structures the data as obtained from the physical world for easy identification, initial use, and re-use in the digital domain (Figure 4). Any given field asset type, such as a weather station or an inverter, has certain data points associated with it and is installed at a given site. The hierarchy provides a modular way of assigning the available data.
While much of the data is obtained from actual field devices via the OPC-HDA source, there are also constant and placeholder signals which are sometimes required and are obtained from smaller monthly data entry spreadsheets supplied by operators. Seeq is able to incorporate these intermittent and external data sources using its DirectoryWatch module.
The Seeq Asset Tree is used to define asset models among similar assets, also enabling analyses to scale. Users typically define these asset models to be as flat as possible so they can take advantage of the easy “asset-swapping” feature, enabled by using regular expressions (regex) in conjunction with point names. An original analysis or a copy can be “swapped” onto other similar assets with similarly named tags. Regex is used to match data point name patterns in the original source historian and map them to the target asset models, yielding unique tags per asset that have equivalents in all other assets.
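The regex-based tag mapping described above can be illustrated with a minimal Python sketch. The tag naming convention (`SITE/ASSET/SIGNAL`) and the names used here are assumptions for illustration; the real convention depends on each site's SCADA configuration.

```python
import re
from collections import defaultdict

# Hypothetical historian tag naming convention: SITE/ASSET/SIGNAL,
# e.g. "PLANT_A/INV01/AC_POWER". Named groups capture each level
# of the asset hierarchy from the flat tag name.
TAG_PATTERN = re.compile(r"^(?P<site>\w+)/(?P<asset>INV\d+)/(?P<signal>\w+)$")

def build_asset_tree(tags):
    """Group flat historian tags into a site -> asset -> signal tree."""
    tree = defaultdict(lambda: defaultdict(dict))
    for tag in tags:
        m = TAG_PATTERN.match(tag)
        if m:
            tree[m["site"]][m["asset"]][m["signal"]] = tag
    return tree

tags = [
    "PLANT_A/INV01/AC_POWER",
    "PLANT_A/INV02/AC_POWER",
    "PLANT_B/INV01/AC_POWER",
]
tree = build_asset_tree(tags)
# An analysis written against PLANT_A/INV01 can be "swapped" to any
# other asset because every asset exposes the same signal names.
```

Because every asset ends up with the same set of signal keys, an analysis bound to one asset's signals can be re-pointed at any sibling asset.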
Users can perform all of their initial analysis developments on one or a few prototypical assets until they are satisfied with the method, then scale it out to many other assets. Even as the analysis is further improved, it can be curated and applied as needed, minimizing required effort.
The Seeq Workbench browser-based application is used to perform a visual and intuitive investigation of time-series data using trends, charts, grids, and other tools. The Seeq Organizer browser-based application is used to assemble these analyses into reports, presentations, and documents.
Analysis in action
With the groundwork in place, the team was ready to develop analyses and dashboards to supersede and move beyond the previous spreadsheet formats (Figure 5).
Figure 5: These Seeq screen captures show actual examples of data cleansing, asset swapping, trending, and KPI reports.
A general first step is usually signal data cleansing, such as smoothing the raw data with a low-pass filter. This reduces the number of effective data points and therefore the computational time, while maintaining the proper level of fidelity. A similar activity is establishing signal data exclusions, which remove extreme values falling outside normal or expected ranges so the resulting computations are not adversely affected.
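These two cleansing steps can be sketched in plain Python. The smoothing factor and the irradiance range used for exclusions are illustrative values, not parameters from the DEPCOM deployment, and the exponential filter stands in for whichever low-pass method the analysis actually uses.

```python
# Out-of-range exclusion: drop readings outside a plausible range
# (bounds shown are an illustrative W/m^2 window for irradiance).
def exclude_out_of_range(samples, lo=0.0, hi=1500.0):
    return [x for x in samples if lo <= x <= hi]

# Simple exponential low-pass filter: smaller alpha = heavier smoothing.
def low_pass(samples, alpha=0.3):
    smoothed = []
    prev = None
    for x in samples:
        prev = x if prev is None else alpha * x + (1 - alpha) * prev
        smoothed.append(prev)
    return smoothed

raw = [810.0, 805.0, 9999.0, 812.0, 808.0]   # 9999.0 is a sensor glitch
clean = low_pass(exclude_out_of_range(raw))  # glitch removed, then smoothed
```

Running the exclusion before the filter matters: smoothing first would smear the glitch value into its neighbors instead of removing it.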
Seeq includes a Journaling feature, so users can document every analysis step-by-step, which fosters collaboration, understanding, and sharing between teams. It also lets developers organize Seeq worksheets neatly by condensing the multiple steps of an analysis into clickable links, so that only the current analysis step is visible at any given time.
Solar energy systems are often evaluated in terms of their P50 value, which represents an expected performance level based on years of solar radiation and weather data. The analytical system was configured to let users enter the relevant P50 budget values for use in other calculations. Industry-specific calculations using this information for daily and monthly operation included plane of array (POA) and global horizontal irradiance (GHI). Seeq allows inputting P50 budget numbers via DirectoryWatch constants, effectively treating them as continuous signals for comparison against the aggregated values coming from field devices.
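The comparison of measured output against a P50 budget reduces to a simple ratio, sketched below. The function name and the numbers are made up for illustration, not DEPCOM figures.

```python
# Compare a month's measured generation against the entered P50 budget.
# A ratio above 1.0 means the site beat its P50 expectation that month.
def performance_vs_p50(measured_mwh, p50_budget_mwh):
    return measured_mwh / p50_budget_mwh

ratio = performance_vs_p50(measured_mwh=4820.0, p50_budget_mwh=5000.0)
```

In the deployment described here, the P50 value would come in as a DirectoryWatch constant and the measured value from the aggregated field signals, but the arithmetic is the same.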
Instantaneous generation is monitored and analyzed to provide measured generation values on a daily and monthly basis. Similarly, meteorological and system data including air temperatures, module temperatures, and average wind speed is trended and made available for other calculations.
Based on much of the analyzed data noted above, the Seeq software can perform relevant American Society for Testing and Materials (ASTM) energy and performance index calculations.
From a maintenance standpoint, another calculation analyzes the soiling level of the PV collectors, which can be used to determine the optimum cleaning intervals. As an example of leveraging asset-swapping, each of the many inverters is tracked for online operational time to determine availability. This information guides maintenance efforts and can help identify issues trending toward potential problems.
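The availability tracking mentioned above amounts to the fraction of sampled intervals an inverter reports as online, which can be sketched as follows. The online/offline flag representation and the short sample window are assumptions for illustration.

```python
# Availability: fraction of sampled intervals the inverter was online.
# A real site would feed in e.g. a day of 5-minute status samples.
def availability(online_flags):
    if not online_flags:
        return 0.0
    return sum(1 for f in online_flags if f) / len(online_flags)

# Stand-in sample: 10 online intervals and 2 offline out of 12.
flags = [True] * 10 + [False] * 2
pct = availability(flags)
```

Because the same calculation applies to every inverter, it is a natural candidate for the asset-swapping approach: build it once against one inverter's signals, then roll it out across the fleet.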
Users can delve into the details for any of these analyses, or they can investigate other areas to develop insights. Most commonly, the calculations are used to support dashboard displays by providing quick overviews of daily metrics, monthly metrics, and measured generation information.
These dashboard displays use the scorecard tool found in Seeq. The scorecard is a table displaying aggregations of key performance indicators (KPIs). The table is arranged with a scorecard KPI metric at the start of each row, and the condition over which it is aggregated (e.g. a time period) shown in the header. Possible aggregations include average, minimum, median, maximum, percentile, standard deviation, sum, totalize (integration over time), value at start, and value at end. Customized aggregations are possible via the function tool, which can return an aggregate for the scorecard.
These aggregations of KPIs can be performed over any given condition. For example, the “Monthly POA Insolation” scorecard metric is the integral of “POA Irradiance” over hours, aggregated by a month condition. Similarly, the “Daily Module Surface Temperature” is the average of “Module Surface Temperature” aggregated by day.
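The two scorecard metrics just described can be sketched in plain Python. This assumes hourly irradiance samples, so totalizing W/m² readings over hours yields insolation in Wh/m²; the sample values are illustrative, not measurements from a DEPCOM site.

```python
# Totalize aggregation: integrate hourly irradiance readings.
# Each sample is W/m^2 held for 1 hour, so the sum is Wh/m^2.
def totalize_hourly(samples_w_per_m2):
    return sum(samples_w_per_m2)

# Average aggregation, e.g. for daily module surface temperature.
def daily_average(samples):
    return sum(samples) / len(samples)

poa = [0, 150, 420, 780, 910, 860, 600, 240, 30]  # one day's daylight hours
insolation_wh = totalize_hourly(poa)               # daily POA insolation, Wh/m^2
temps = [21.0, 24.5, 28.0, 30.5, 29.0]
avg_temp = daily_average(temps)                    # daily average, deg C
```

In Seeq, the "month" or "day" condition selects which samples fall into each aggregation window; here that windowing is implied by passing in one day's samples.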
Generating hourly, daily, and monthly dashboards of this data allows DEPCOM to provide quick and efficient reports to their solar site clients. Seeq speeds up this process considerably, freeing DEPCOM performance engineers to work on other tasks.
Any process industry, manufacturer, or utility can benefit from data analytics. Solar PV power plants as operated by DEPCOM are good candidates because they often already have PLC, SCADA, smart device, and networking infrastructure in place which they can leverage, and they also typically operate multiple similar sites. User-friendly software like Seeq Server, Workbench, and Organizer is readily deployed locally, at the enterprise level, or in hybrid combinations to help users apply their expertise and improve operations.