Built First. Built Better. Still Ahead.

They say imitation is the sincerest form of flattery, and at Blue Lucy we’re pleased to see our vision helping shape the direction of the industry.

While others may be working to close the gap, our orchestration engine already features a library of over 600 individually packaged microservices, ready to build into automated workflows that meet the complex and evolving needs of leading media operators, broadcasters, and brands.

This isn’t new territory for us. Blue Lucy has been delivering measurable business value in the field since 2020. In 2025 alone, more than 133 million microservice executions ran across our user base, representing a 30% increase year over year.

Recent pre-NAB announcements suggest a broader recognition across the market: highly complex platforms that depend on dedicated DevOps support can slow down, rather than accelerate, the delivery of business value.

From the outset, Blue Lucy has taken a different approach, combining powerful configurability with genuine ease of use. Our intuitive low-code/no-code workflow builder, paired with a responsive and accessible user experience, enables teams to unlock operational value without the burden of unnecessary complexity.

For the real deal, and a real demo, visit Booth W2318 at NAB.


Blue Lucy Technology

A deep dive into the platform.

Architecture

The Blue Lucy platform follows a distributed microservices architecture, meaning the overall operational capability is structured as a collection of loosely coupled services. This architecture is robust, resilient and conforms to the separation of concerns paradigm. The singularity of purpose that is a key tenet of separation of concerns means the platform is easy to maintain and extend. Equally, there are significant opportunities for re-use of components, which speeds up our development: as new business requirements come in – such as a connector to a new 3rd party system – we can implement the capability with unparalleled speed.

Overview 

The overall architecture comprises the database and two core Blue Lucy components: the Application Programming Interface (API) and the Workflow Runner (WFR).

The database is the single data repository of the system; the API and WFR are stateless services, which allows automated horizontal scaling. Blue Lucy hosted services are typically deployed in a Highly Available (HA) configuration comprising two API engines and two WFRs arranged behind a load balancer.
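As a sketch of why statelessness matters here: because neither service holds request state, a load balancer can rotate freely between instances and more can be added for horizontal scale. The snippet below is illustrative only, not Blue Lucy code.

```python
from itertools import cycle

# Two stateless API instances: because no request state is held in the
# service, any instance can serve any request, so a load balancer can
# rotate between them freely (and more instances can be added at will).
def make_api_instance(name):
    def handle(request):
        # A stateless handler: output depends only on the request itself.
        return {"served_by": name, "echo": request}
    return handle

instances = [make_api_instance("api-1"), make_api_instance("api-2")]
balancer = cycle(instances)  # simple round-robin load balancing

def dispatch(request):
    return next(balancer)(request)

print(dispatch("GET /assets")["served_by"])  # api-1
print(dispatch("GET /assets")["served_by"])  # api-2
```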

The API and WFR 

The API and WFR are containerised services and will run on any compute infrastructure, which means the platform is not only agnostic to the cloud environment in which it runs but may also be run ‘on prem’ in any suitable container orchestrator such as Kubernetes. This affords maximum operational flexibility, including running cloud-ground hybrid systems – an important capability in an industry in which most systems and media are currently located at the facility. Around 80% of our platform deployments are cloud-ground hybrid. The distributed architecture also supports worldwide operating models for globally distributed business operations.

Microservices

Within the platform orchestration layer there is a further abstraction between the WFR and the microservices which perform the operationally specific functions at run time. The microservices are individual executable components in their own right, separate from the WFR itself. This enables the microservices to be developed independently of the WFR and provides an extra level of safety at run time. This true microservice architecture gives the Blue Lucy business unparalleled development scale and means that microservices may be developed by third parties. Microservices may be updated, or new ones applied to live systems, without any downtime or interruption; the new services are simply picked up by the WFR when called.
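A minimal illustration of that late-binding behaviour (the registry and names below are hypothetical, not the WFR’s internals): because the runner resolves a microservice at call time rather than at start-up, a newly deployed or updated service is picked up on the next invocation with no restart.

```python
# Hypothetical sketch: the catalogue stands in for packaged microservices
# that a workflow runner discovers at call time.
catalogue = {}

def register(name, fn):
    catalogue[name] = fn  # deploying or hot-updating a microservice

def run_step(name, payload):
    return catalogue[name](payload)  # late binding: resolved per call

register("uppercase", lambda p: p.upper())
print(run_step("uppercase", "media"))             # MEDIA

register("uppercase", lambda p: p.upper() + "!")  # hot update, no restart
print(run_step("uppercase", "media"))             # MEDIA!
```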

Our microservices run in the WFR and interact with the platform API and the API of the third-party services directly, enabling real time updates between the platform orchestrator and subsystems. There are currently more than 600 microservices available off the shelf of which approximately 250 are integration connectors to media and business systems.

Alongside the microservices, the platform also has a range of plugins which are similar in construct and equally hot-pluggable, but are designed for the integration of event-driven architectures and are deployed as listeners. Examples in use might be a plugin subscribed to a message queue listening for specific events, or an HTTP listener to extend the platform’s API.
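A toy sketch of the listener pattern described above, using only Python’s standard library; the event shapes are invented purely for illustration.

```python
import queue

# Hypothetical sketch of a listener-style plugin: it subscribes to a
# message queue and reacts only to the event types it cares about,
# leaving the rest of the platform untouched (event-driven integration).
events = queue.Queue()
handled = []

def asset_created_plugin(event):
    if event.get("type") == "asset.created":
        handled.append(event["id"])

for e in [{"type": "asset.created", "id": "A1"},
          {"type": "asset.deleted", "id": "A2"},
          {"type": "asset.created", "id": "A3"}]:
    events.put(e)

while not events.empty():
    asset_created_plugin(events.get())

print(handled)  # ['A1', 'A3']
```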

Integrate and extend 

The platform has an open REST API, which is the same API used by the platform’s user interfaces (UI). The API is supported programmatically with embedded documentation generated by Swagger, and further documentation is hosted in the online knowledge portal, Blue Lucy Central, which may be accessed directly within the applications. In addition, a full Software Development Kit (SDK) is available which allows developers to build microservices for the platform.
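By way of illustration, a thin programmatic client over a REST API often looks like the sketch below. The endpoint path and field names are hypothetical, not the documented Blue Lucy API, and the transport is injected so the example runs offline.

```python
# Illustrative only: "/api/assets/…" is an invented path, not a real
# Blue Lucy endpoint. The pattern shown is the common one: the same
# REST API that drives the UIs is equally callable from code.
def get_asset(asset_id, transport):
    return transport("GET", f"/api/assets/{asset_id}")

def fake_transport(method, path):
    # Stand-in for an HTTP call to the platform.
    return {"method": method, "path": path, "status": 200}

resp = get_asset("abc123", fake_transport)
print(resp["path"])  # /api/assets/abc123
```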

Available as a package from NuGet, the developer-friendly .NET SDK enables software engineers to code in their preferred IDE, such as Visual Studio, with helper features such as IntelliSense facilitating rapid and predictable development. The SDK supports rapid learning, with standard methods for accessing data, and provides a safe interface, as its commands interact with the Blue Lucy API rather than lower-level components. The SDK allows developers to use any .NET-compatible library, which provides the freedom to integrate any 3rd party component.

Using the SDK is more powerful than simply calling the API, as it utilises the WFR service to perform any 3rd party function or interaction. This has the potential to extend the useful functionality of the platform well beyond the usual media and broadcast systems, driving more business value for operators. The public SDK is the same tool that we use internally for development, so it is proven, robust and regularly updated. An SDK is also available for Python, and the Python runtime environment is included with the WFR as standard.
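As a hedged sketch of what a Python microservice might look like: the names `run` and `WorkflowContext` are illustrative, not the SDK’s actual API. The point is the shape – a small, single-purpose unit that receives context from the runner and returns a result the workflow can route on.

```python
# Hypothetical shapes only; the real SDK types and signatures may differ.
class WorkflowContext:
    def __init__(self, inputs):
        self.inputs = inputs
        self.outputs = {}

def run(ctx):
    # Single purpose: derive a proxy filename from the source media path.
    src = ctx.inputs["source"]
    ctx.outputs["proxy"] = src.rsplit(".", 1)[0] + "_proxy.mp4"
    return "success"

ctx = WorkflowContext({"source": "promo_master.mxf"})
status = run(ctx)
print(status, ctx.outputs["proxy"])  # success promo_master_proxy.mp4
```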

Frameworks 

The platform is underpinned by .NET 10, the latest and fastest .NET framework, which affords a truly cross-platform, open source, common language runtime environment. .NET 10 offers long-term supportability and an excellent security model, and is optimised for containerised deployments, providing true enterprise-level robustness.

For the tech ops and administrators’ ‘factory’ interface we use the Angular 20 framework from Google, and for the production operations ‘hub’ view we use React, developed and maintained by Facebook. Both frameworks provide an optimal user experience within their operational use cases. SignalR is used extensively in the user interface to provide real-time status updates: it eliminates the need for polling from the front-end, reducing chatter and giving users instantaneous operational status updates.

For deployed platform observability we conform to industry standards, including supporting OpenTelemetry, to enable centralised logging, metrics and full tracing. You can even inject your own trace ID to integrate fully with upstream triggers, or push the trace ID to downstream recipients, giving powerful, versatile, end-to-end operational visibility.
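For illustration, trace ID propagation in the W3C Trace Context format (the header format OpenTelemetry uses) can be sketched with the standard library alone: an upstream trigger supplies its trace ID, and the same ID is carried downstream with a new span ID so a single trace covers the whole chain.

```python
import secrets

# Minimal sketch of a W3C "traceparent" header:
#   version - trace_id (32 hex) - span_id (16 hex) - flags
def make_traceparent(trace_id=None):
    trace_id = trace_id or secrets.token_hex(16)  # 32 hex chars
    span_id = secrets.token_hex(8)                # 16 hex chars
    return f"00-{trace_id}-{span_id}-01"

# An upstream system injects its own trace ID...
upstream = "4bf92f3577b34da6a3ce929d0e0e4736"
header = make_traceparent(upstream)
print(header.split("-")[1] == upstream)  # True

# ...and a downstream call reuses the trace ID with a fresh span ID.
downstream = make_traceparent(header.split("-")[1])
print(downstream.split("-")[1] == upstream)  # True
```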

Platform updates 

In development, Blue Lucy follows continuous integration principles, which drives a modular development approach and ensures robust and reliable operation.

Equally, software updates follow the continuous delivery paradigm: they are deployed automatically for Blue Lucy managed systems, and updates to the platform core services are made available to customer-managed environments.

We are currently making major releases, typically pegged to specific new features or platform capabilities, between six and nine times a year. As the microservices are abstracted from the core, they may be released to in-production systems as required. In all cases updates may be deployed with zero, or near-zero, downtime.

Blue Lucy prefers to focus on the operational business value of our products rather than technology, but the overall architecture and our approach to software development is an important aspect of our value proposition. If you would like more information about the Blue Lucy software architecture, we are always happy to talk.


Blue Lucy’s 6 Key Tenets

Modern media operations demand a platform that unites automation, orchestration, and human oversight without compromise. In this post, we explore the six key tenets that underpin Blue Lucy’s technically robust, extensible, and operationally efficient platform, built for high-volume, complex workflows across hybrid environments.

1. Combine core capabilities

‘Static’ MAMs without workflow orchestration do not provide the level of automation required for today’s high-volume operations. Equally, workflow orchestration platforms that lack a comprehensive MAM component are insufficiently nimble to support the complexity of a modern media operation. Automation platforms that neglect human operations negate the efficiency they deliver. The Blue Lucy platform combines these core capabilities to deliver maximum efficiency, control, and visibility.

2. Microservices not monolith

The Blue Lucy platform follows a distributed microservices architecture, meaning the overall functional capability is structured as a collection of loosely coupled services. The architecture is robust, resilient and conforms to the separation of concerns paradigm. The singularity of purpose is a key tenet and means the platform is easy to maintain and extend. The platform comprises more than 500 microservices, orchestrated by the Workflow Runner service, making it the most readily extensible media integration platform.

3. No forking branches

Media is a complex operational business which has seen decades of evolution. There is an inevitable need for bespoke software components in all but the newest of operating models. It is still common for vendors to branch, or fork, the source code to create a version specific to a customer. This approach presents significant risk to operators. The platform has been structured so that Blue Lucy does not manage numerous code branches or build scripts for different customers. We have a single code base with customer specific bespoke microservices.

4. Integration, Integration, Inte…….

The most significant business value and operational gains come from integrated software and services. But operations built on supplier driven ‘ecosystem’ models are closed and are rightly described as vendor lock-in traps. Such approaches tend to work only as long as it commercially suits the members of the vendor cartel. The Blue Lucy platform has been designed to provide the integration layer between systems allowing technology abstraction whilst driving operational cohesion.

5. No code, low code, code

The platform has been designed to be accessible to operators with varying levels of business and technical expertise. It conforms to the ‘no-code’ paradigm and includes a drag-drop-connect-configure workflow builder, which is atomic, intuitive and means that business analysts can rapidly build and maintain complex operational workflows. In addition to the 500+ microservices, the workflow builder allows secure scripting in C# or Python. Lower-level, developer access is provided through the API, which is supported by an open SDK – the same SDK we use to develop the microservices.

6. Run anywhere

‘Cloud native’ sounds very modern, but binding an operational capability to the cloud, particularly to specific cloud providers, is cloud dependency – the antithesis of the ethos behind the service-oriented model. Equally, media operations simply do not fit into an on-prem (ground) OR cloud model. Blue Lucy core services are containerised and infrastructure-agnostic, enabling a controlled migration to the cloud or, more typically, a ‘hybrid’ cloud-ground deployment and operating model.

 

By combining extensible microservices, intuitive workflow orchestration, and hybrid deployment flexibility, Blue Lucy delivers a platform that empowers media operators to run complex workflows with control, visibility, and efficiency – turning technical sophistication into real-world operational impact.


Has Video outgrown your DAM?

Digital Asset Management systems sit at the heart of most marcoms operations. They centralise content, organise it, and make it discoverable. Integrated with the wider MarTech stack, DAMs support governance and drive efficiency. But video has changed the brief.

Video is no longer an occasional campaign asset. It is now the dominant content format across marketing, product, internal communications, and customer engagement. And as volumes grow, so do the operational pressures.

The issue is not whether your DAM can store video.
The issue is whether your teams can discover, reuse, adapt, and govern it efficiently at scale.

Because when discovery slows, reuse drops.
And when reuse drops, costs rise – often without anyone noticing.


Video Is ‘Just Another File’ – Until It Isn’t

At a basic level, a video file is simply another digital asset. A DAM can store it, catalogue it, and apply metadata to it. But video carries characteristics that fundamentally change how it needs to be managed:

  • Format complexity. Video comes in a wide range of encoded formats (CODECs), each with different configurations – frame rates, encoding structures, audio arrangements. These aren’t cosmetic differences; they directly affect compatibility, quality, and approach to distribution.
  • File size and accessibility. Professional video files are large and often not web-browser compatible. That makes previewing, streaming, and collaboration harder within systems designed primarily for static media.
  • Time-based structure. Unlike images, video unfolds over time. Metadata doesn’t just apply to the whole asset – it applies to specific periods within it.
  • Localisation and variants. Subtitles, audio stems, regulatory edits, regional variations – these are related and often interdependent components, not just new versions of the same file.
  • Derivative creation. Social cutdowns, vertical edits, different durations – all need to maintain lineage back to the master asset to avoid duplication and rights infringements.
  • Ongoing editing cycles. Video assets are routinely adapted long after creation or first publication. Their lifecycle is longer, dynamic and continuous.

And perhaps most importantly, creatives and marketers are rarely searching for a file.
They are searching for a moment – a product shot, a quote, a scene, a reaction. That distinction is where traditional DAM models begin to strain.

Finding the Right Moment – Not Just the Right Asset

Metadata has always powered discovery inside DAM systems. This object-based metadata – be it campaign, product, spokesperson, usage rights – works well when assets are static.

But video exists in two dimensions:

  • Catalogue metadata – information about the asset as a whole.
  • Temporal metadata – information tied to specific time periods within the asset.

A tag might say “Product X is in this asset,” but it won’t say whether that appears in the first five seconds or the last thirty. It won’t tell you if the segment you want to use is already in use elsewhere, or whether its rights have expired. That lack of clarity increases risk and kills efficiency.
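The catalogue/temporal distinction can be sketched in a few lines (the field names here are hypothetical, purely for illustration): the same asset carries metadata about the whole file and metadata tied to time ranges, and moment search queries the latter.

```python
asset = {
    "title": "Spring campaign hero film",          # catalogue metadata
    "rights": "worldwide",                         # catalogue metadata
    "segments": [                                  # temporal metadata
        {"start": 0.0,  "end": 5.0,  "tags": ["logo"]},
        {"start": 5.0,  "end": 32.0, "tags": ["product-x", "spokesperson"]},
        {"start": 32.0, "end": 41.0, "tags": ["product-x", "call-to-action"]},
    ],
}

def find_moments(asset, tag):
    """Return the time ranges in which a tag appears, not just the asset."""
    return [(s["start"], s["end"]) for s in asset["segments"] if tag in s["tags"]]

print(find_moments(asset, "product-x"))  # [(5.0, 32.0), (32.0, 41.0)]
```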

At a small operational scale, teams can compensate with knowledge (memory), spreadsheets, and manual review.

At enterprise scale – across regions, agencies, languages, and campaigns – that approach quickly breaks down.

When discovery doesn’t deliver, teams instinctively create their own workarounds: local edits, shared folders, private versions – bypassing the DAM because it doesn’t give them what they need when they need it. That behaviour isn’t just inefficient, it erodes governance, inflates production costs, and reduces RoI from existing content.

 

The Hidden Cost of “Making It Work”

Most modern DAM platforms support video in some form. Many do so capably within the limits of their original design. But “supporting” video often means adapting workflows around a model designed for static assets.

That adaptation typically looks like:

  • Additional tools bolted on around the DAM
  • Manual reformatting and distribution processes
  • Workarounds for preview and playback
  • Fragmented metadata across systems
  • Disconnected rights tracking

Individually, these compromises feel manageable.
Collectively, they create friction — and friction increases exponentially as content volumes grow.

 

Managing Video Requires a Shift in Perspective

The real question isn’t: “Can our DAM store video?” It’s: “Are we managing video on its own terms?”

What’s emerging is not a rejection of DAM, but a more nuanced ecosystem:

  • DAM remains essential for governance, brand control, and enterprise-wide visibility.
  • Video-native systems handle time-based metadata, format complexity, version control, and high-volume processing.
  • Integration ensures both operate cohesively rather than competitively.

Savvy teams are not looking for a monolithic “silver bullet.” They are rethinking their architecture so that each specialised system – DAM, video indexer, transcoder, rights engine – contributes a distinct capability. The task then becomes enabling systems to collaborate, not forcing one to do everything. That mindset separates high-performing teams from those stuck patching processes.

 

Lessons from Media & Entertainment

These challenges are not new. The Media & Entertainment sector has been solving them since the late 1990s through Media Asset Management (MAM) systems. For broadcasters, manual processes were never viable.

Operating efficiently required:

  • Structured, time-aware metadata
  • High levels of automation
  • Tight integration between production and business systems
  • Clear orchestration across ingest, edit, versioning, and distribution

As corporate video demand begins to reach broadcast volumes, marcoms teams are encountering similar pressures, often without the infrastructure on which professional media organisations rely.

 

Automation Is No Longer Optional

With marcoms content demand chains growing rapidly, manual operations are becoming infeasible – even for mid-sized teams.

Video management at scale requires orchestration:

  • Automated transcoding into multiple formats
  • Structured version control
  • Omni-channel distribution
  • Integrated rights and compliance management

Adjacent to automation is the need to reduce friction between systems. Modern media systems such as the Blue Lucy platform are built with structured integration frameworks designed to connect production tools, DAMs, and business systems efficiently.

Because the real risk isn’t just storage capacity.
It’s operational complexity, and the erosion of value from content you’ve already invested in creating.

 

The Real Challenge

Video is now the dominant marketing medium. Storing it is easy. Managing it intelligently – discovering moments, reusing content, coordinating derivatives, and maintaining governance at scale – is the real challenge.

Organisations that recognise this shift early are building integrated, automated video operations designed for growth.
Those that continue adapting static systems to dynamic media will find the friction, and the cost, only increases over time.

 


The AI Wild West, and Why it Needs a Sheriff

AI Is Scaling Faster Than Governance – And That’s a Risk

AI adoption hasn’t rolled out through neat transformation programmes. It has spread organically, driven by teams trying to move faster. It’s already embedded across newsrooms, marketing departments, communications teams, HR, legal and strategy functions. Often informally, and often without central oversight.

A producer indexes archive footage using an AI tool. A marketing team analyses sentiment. An editor runs a clip through a model to check for profanity.

Each action feels efficient, helpful, low risk. But collectively, they create something most organisations aren’t prepared for: AI embedded in core workflows without visibility, control or traceability.

Where did the data go? Which model was used? Was the output reviewed? Were any rights unintentionally waived in the process?

In many cases, no one has a complete picture. AI hasn’t outpaced governance because organisations are careless. It has outpaced governance because the tools are frictionless – and governance isn’t.

Reputational Risk Now Moves at Machine Speed

The reputational equation has fundamentally changed.

One hallucinated output. One biased summary. One automated decision that shouldn’t have been automated.

And it can be published, shared and amplified instantly.

For media organisations in particular, this is high stakes. Publishing misinformation is damaging enough. Publishing it at machine speed, with unclear accountability, compounds the impact. When something goes wrong, the questions are immediate:

Was AI involved? Was it checked? Who approved it?

If those answers aren’t clear and defensible, credibility takes the hit. AI doesn’t just scale productivity. It scales exposure.

Regulation Is Accelerating – and Accountability Is Personal

At the same time, regulation is catching up quickly. New frameworks demand transparency, oversight and traceability in AI-assisted decisions and content production. Executives are accountable, even when outputs are generated by third-party models. Yet many organisations cannot currently evidence which model produced a specific output, what data informed it, what safeguards were applied, or how the output was reviewed before release.

Policies may exist. Ethical principles are often well articulated. But unless they are embedded in operational systems, they don’t provide protection. The gap between intent and implementation is where risk lives.

Speed Versus Safety Is the Wrong Debate

There’s a perception that governance slows innovation. In reality, the absence of governance creates far greater friction later: retractions, investigations, legal exposure and long reputational repair cycles.

If AI was adopted to improve efficiency, reconstructing an audit trail across multiple disconnected tools defeats the purpose. Manually piecing together who used what, where and how is both time-consuming and unreliable.

The smarter approach is to embed governance directly into the workflow – so it happens automatically, not retrospectively. That’s where managed orchestration becomes critical.

Orchestration: Bringing Control to AI at Scale

What organisations need isn’t just access to AI models. They need control over how those models are selected, used and reviewed.

At Blue Lucy, we’ve focused on building that management layer.

Our orchestration engine has direct integration connectors to multiple AI service providers and platforms, allowing millions of models to be accessed and controlled within a single platform. This allows organisations to choose the most appropriate model for each use case – whether that’s transcription, summarisation, compliance checking or content enhancement – while maintaining absolute control over access and usage.

Traceability is built in.

If AI generates part of a clip, that segment can be flagged for enhanced editorial scrutiny. The prompt can be stored. The model used is recorded. The approval process is logged. An electronic and accessible audit trail exists by default, not as an afterthought.

This isn’t about embedding a limited number of models and hoping they cover every requirement. It’s about enabling organisations to use the best-fit models for their business in a way that is governed, auditable and aligned with their risk profile.

This approach enables your operation to move AI from experimentation to enterprise-grade implementation.

Trust Is the Competitive Advantage

For media brands, trust is the product. Audiences, clients and regulators are increasingly asking the same questions: Was AI involved? Was it checked? Who is responsible?

Being able to answer clearly and confidently isn’t just a compliance exercise. It’s a commercial advantage.

The organisations that will win in this next phase of AI adoption won’t be the ones who moved fastest. They’ll be the ones who scaled responsibly.

Control your inputs. Audit your outputs. Integrate AI intelligently. Embed governance.

Because while AI accelerates value, without the right management layer it drives risk just as quickly.

Some commentators describe the current landscape as ‘the AI Wild West’ – in that context the winners will be those with sufficient sheriffs, not the fastest guns.


What’s New in the BOLT Content Hub?

The BOLT Content Hub just got smarter. Our latest update brings a host of key improvements to help you work faster, collaborate better, and get more value from your content. Check out our Top 6!

Smarter Home Page

Your Home Page now puts what matters most front and centre:

  • Stay up to date – See recent uploads and activity at a glance.
  • Pick up where you left off – Quickly resume work on assets and collections.
  • Make it yours – Customise your Home Page for your workflow.
  • Search instantly – Jump straight into asset search from the Home Page.

Upload Portals

Collect content from external contributors securely and efficiently:

  • Easy setup – Create portals in a few clicks.
  • Full control – Manage access, expiry dates, and approved contributors.
  • Streamlined approvals – Workflows ensure files follow the right process.
  • Metadata at upload – Keep your library organised from day one.
  • On brand – Customise portal themes for a seamless contributor experience.

Accelerated Upload

Get files into BOLT faster and more reliably – content is discoverable and actionable the moment it arrives.

  • Built-in acceleration – No third-party licences needed.
  • Multi-part, multithreaded uploads – Speed through large files.
  • Optimised bandwidth – Smooth performance on any connection.

Version Comparison

Compare and manage versions with ease, and maintain accuracy and efficiency across your content library.

  • Sync playback – Compare previous and current versions side by side.
  • Quick uploads – Drag-and-drop new versions directly into assets.

Review and Approve

Collaborate and give feedback faster:

  • Mark and highlight – Draw attention to details and leave comments.
  • Timecode tracking – Link mark-ups to video/audio timestamps.
  • Customisable tools – Pick pen colours and undo mistakes easily.
  • Team visibility – View mark-ups from all users.
  • Broader support – Works on images and other non-video/audio assets too.

Subclips

Create and manage clips with precision. Repurpose and share content efficiently, without losing control or context.

  • Frame-accurate creation – Generate subclips from any video or audio.
  • Flexible rendering – Choose output options that fit your workflow.
  • Track relationships – See all subclips from the same parent asset.
  • Workflow automation – Send to YouTube, add intros, or configure custom processes.

Get in touch to explore how the BOLT Content Hub can make your workflows faster, smarter, and more connected.

Insights from the IBC Floor

Blue Lucy’s key takeaways from this year’s show: Attendance at this year’s IBC was apparently flat, and the numbers suggest market confidence is still a little fragile. But the show offered some fascinating opportunities for the Blue Team. It was great to meet new industry professionals, take part in straight-talking panels, and feel the pulse of the industry. Here’s what stood out:

YouTube, YouTube, YouTube

The phenomenal growth in YouTube viewing stats on Smart TVs has been well publicised and was the subject of much conversation on panels and on the show floor. If you don’t have a commercial strategy for YouTube you are missing a proven revenue stream and risk “slipping into irrelevance”. Blue Lucy’s approach to YouTube, which allows operators to directly manage their inventories on the platform in the context of rights assertion and ensure take-down requests are adhered to, is nicely covered in our recent case study with Banijay Rights.

FinOps is Fashionable

Well, not quite, but media companies are finding out that “cloud” in itself is not a strategy. CFOs are rightly paying closer attention to cloud costs, the operational benefits and tangible RoI – and are challenging some fashionable orthodoxies. FinOps has become a key function to ensure commercially sustainable operations, and it is becoming clear that hybrid models have a vital role to play in balancing flexibility and cost. For many media operations the most business-savvy strategy may be to sweat the assets that they have, and mitigate cost risk by migrating services to the cloud iteratively, validating the RoI as you go. Blue Lucy’s project approach supports this model and delivers measurable value fast. Find out more about our iterative approach at point 6 in our earlier blog post.

Content Liquidity

We love this new term – well, new to us anyway – as it embodies how we think about media supply operations. Our BLAM Content Factory helps unlock the commercial value of your inventory by providing highly configurable workflows to distribute your content on a huge range of consumer and intermediary platforms. With hundreds of integrations, truly scalable automation, and fast-to-deploy capability, the Content Factory delivers rapid time to value. Together with our BOLT Content Hub – a super simple to use content discovery platform – Blue Lucy delivers powerful business enablers.

 

Now to carry on with the follow-up trials and business-focused ‘proof of concept’ implementations. Thanks again to Team Blue for a brilliant show, the insightful conversations, and the great shirts!

 


What NAB told us about the future of media tech

This year’s NAB Show in Las Vegas marked a noticeable shift in the priorities of media and broadcast organisations. Gone are the days of chasing flashy, or “cool”, innovation for innovation’s sake. Instead, the conversations we had, and the interest in our solutions, made one thing clear: the industry is doubling down on practicality, efficiency, flexibility and value. As a technology partner, that message resonated with us. It validated our ongoing focus on delivering tools that don’t just push boundaries, but solve real-world challenges scalably, securely, and cost-effectively. Here are the key themes that shaped our NAB 2025 experience:

Cost Control is Now a Strategic Priority

Across the board, operational cost reduction has become the top agenda item. Many vendors push a “transformation” agenda, but from users we mostly heard “measurable RoI”, “efficiency”, and “time to value”.

Instead of massive technology overhauls, customers are prioritising targeted improvements with measurable outcomes. Our BLAM integration and orchestration platform is designed to support such an approach, streamlining operations without requiring wholesale change – BLAM stood out as a natural fit.

Hybrid Cloud/Ground Is the New Norm

The industry’s cloud conversation has matured. It’s no longer about choosing between on-prem or cloud; it’s about finding the right balance based on operational business need and cost. Organisations are increasingly adopting hybrid architectures that maintain critical workflows on-premises while using the cloud where it is more efficient or cost-effective.

Our platform was designed to support hybrid deployment from the outset and enables seamless integration across systems, services, and territories. 80% of our deployments are cloud-ground hybrid and deliver cost effective control, flexibility and scale.

Integration Is Essential, Not Optional

With consumer platforms growing exponentially across OTT, FAST, and social, manual workflows simply can’t keep up.

The benefit of automation is a given, but integration delivers further value by reducing friction, removing manual processes, and providing end-to-end visibility. At scale, operational efficiency is no longer a business benefit; it’s a survival requirement.

Unlocking the Value of Content Archives

Companies are looking to mine their legacy libraries for untapped value, especially in digital and on-demand markets.

Our solutions support fast discovery, repackaging, and delivery through automation and easy-to-access tools. Blue Lucy is helping customers monetise what they already have, without the heavy lifting.

Fast Value, Real Accountability

Today’s buyers are sceptical of long, drawn-out transformation programs with vague promises and PowerPoint workflows. They want to see real value, fast.

At NAB, we heard over and over how important rapid deployment, measurable results, and continuous improvement have become. Our approach reflects this: we are a long-term partner in outcomes and continuous service, not just a provider of products.

Empowering the Creator Economy Securely

With content creation becoming more decentralised – and the creator economy exploding – organisations need to give teams secure, flexible access to content.

Whether it’s internal creatives, freelancers, or partners, our platform ensures content is available wherever it’s needed, without compromising security or governance. That balance of ease of access and control is more critical than ever.

Looking Ahead

NAB 2025 confirmed what we’ve been hearing in conversations all year: innovation still matters, but it must be grounded in usability, agility, and value.  Blue Lucy is continuously building technology that meets those demands, today and into the future.


Saving money in the media supply chain

6 RULES TO LIVE BY

Arguably the top priority for media businesses in 2025 is to reduce operational costs. But that’s easier said than done when you’re faced with increasing content delivery demands for an ever-evolving consumer landscape. Based on our real-world experience working with international media organisations, we’ve put together six principles to help you save money across the content supply chain.

1. You can’t monetise what you can’t see.

There are vast libraries of unmanaged content, running into tens of petabytes, at a number of the big production labels – much of which isn’t even available as a browsable version. This media may as well not exist: you can’t monetise what you can’t see. The situation is increasingly common thanks to the significant M&A activity in the industry, which has combined the media catalogues of once-separate production and distribution companies. But large-volume content management is not an insurmountable problem. The first priority for any content owner or rights holder should be to bring everything under management so that you understand what content is in the library. Connect the MAM to the media, wherever it is, and have it register the material, generate a visible browse version, and hoover up as much metadata as it can find. If there really is no data, a simple microservice program can automatically generate metadata from the information contained in your filenames. You might be surprised by how far this will take you.
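As an illustration only, the filename-driven metadata pass described above can be sketched in a few lines of Python. The naming convention, field names and function here are hypothetical; a real implementation would be built around each library’s own house style:

```python
import re
from pathlib import Path

# Hypothetical house convention: "<Title>_<SS>x<EE>_<LANG>.<ext>",
# e.g. "BlueSkies_02x07_EN.mxf". Adapt the pattern to your own filenames.
FILENAME_PATTERN = re.compile(
    r"(?P<title>[A-Za-z0-9]+)_(?P<season>\d{2})x(?P<episode>\d{2})_(?P<language>[A-Z]{2})$"
)

def metadata_from_filename(path: str) -> dict:
    """Derive basic catalogue metadata from a media filename."""
    stem = Path(path).stem              # strip directory and extension
    match = FILENAME_PATTERN.match(stem)
    if not match:                       # unrecognised name: keep the stem as a title
        return {"title": stem}
    fields = match.groupdict()
    fields["season"] = int(fields["season"])
    fields["episode"] = int(fields["episode"])
    return fields

print(metadata_from_filename("BlueSkies_02x07_EN.mxf"))
# {'title': 'BlueSkies', 'season': 2, 'episode': 7, 'language': 'EN'}
```

Even a crude pass like this turns a wall of opaque files into searchable records, which is the point of rule 1.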

2. Nobody move anything, until you know what it is.

An end-to-end media supply chain is not synonymous with the cloud. Your media, and the workflows involved in creating, managing and delivering content across various platforms and tools, can be controlled and observable whether they’re in the cloud, on the ground (on-prem’), or any combination of the two. Bringing media under management doesn’t mean moving it to the cloud or anywhere else. Once you’ve registered the assets and made them visible (see point 1 above), you can make an informed, value-based business decision as to the most appropriate place for them to be stored. Too often we hear media execs talk about their cloud strategy rather than their business strategy, treating ‘the cloud’ as an outcome when it should be viewed as a component – albeit an extremely powerful component – of a business objective.

3. Clean as you go.

Cloud services can deliver significant flexibility and efficiency, and storing material in cloud storage has accessibility and security benefits. But cloud storage is definitely not the least expensive option. So, if you choose to migrate your content into the cloud, it’s worth getting your housekeeping done before you make the move. First understand what you have, what condition it’s in, what its likely value is, and what rights you hold. Other simple housekeeping tasks – such as deduplicating material, or identifying minutes of colour black run-out from a digitised tape or camera-pointing-at-the-ground rushes – can also be carried out with the material in situ. The ’90s broadcast engineers’ joke about operators inadvertently archiving hundreds of hours of colour bars isn’t quite so funny when you move from $10/TB LTO to incrementally priced cloud storage.

For this reason, we tend to support a controlled migration of content and workflows to the cloud rather than the ‘forklift content and sort it out when it gets there’ approach.
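One of those housekeeping tasks, deduplication, ultimately reduces to grouping files by a content checksum. A minimal Python sketch (not a description of any particular product’s implementation) might look like this; the chunked hashing keeps memory use flat even for multi-gigabyte media files:

```python
import hashlib
from collections import defaultdict

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Checksum a file in 1 MB chunks so large media files never load into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_duplicates(paths):
    """Return groups of paths whose byte content is identical."""
    by_hash = defaultdict(list)
    for p in paths:
        by_hash[sha256_of(p)].append(p)
    return [group for group in by_hash.values() if len(group) > 1]
```

In practice you would shortlist candidates by file size first and hash only the collisions; hashing petabytes outright would defeat the cost-saving purpose.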

4. You don’t need to shut down existing systems to modernise your operation.

Your ‘legacy’ systems can, and probably do, still deliver value. There’s no need to change your entire technology ecosystem just because you want to introduce new tools, streamline your workflows or take advantage of cloud scale. In a constantly developing technology landscape, the best approach is to integrate, not deprecate. Your MAM orchestration platform should integrate with both legacy and new technologies so that existing systems can continue to deliver value while new tools – such as AI applications – are readily incorporated into workflows to support ever-changing business needs. At Blue Lucy we achieve this by using our BLAM microservice architecture to connect disparate systems and enable a controlled migration to modern workflows.

5. It doesn’t matter where the content is. No, really.

The actual location of your content should have no bearing on your ability to monetise it. Your MAM should provide easy and uniform access to all your assets, no matter where the ops team or the media is physically located. On-prem and cloud storage should be viewed interchangeably, rather than one being for operations, such as fulfilment, and the other for the “safety copy”, because your MAM ‘knows’ where the material is and can be configured to use the most appropriate repository. And if your operation is all-cloud, the need for a “safety copy” is moot, as the cloud vendor can provide multi-copy resilient storage through simple configuration. 80% of Blue Lucy BLAM deployments are hybrid and use both cloud and ground storage.

6. Start small and work incrementally.

Big bang “Transformation” projects are dead, and good riddance. Big projects cost big money, carry big risk and often end in big disappointment (when they’re not quietly killed). Even the successful ones take a long time to deliver value, and the true RoI rarely moves the needle on the CFO’s dashboard – certainly once the fees of the long-departed big consultancy are included.

Instead, big programmes can be delivered in small steps using modern service-based technology and open APIs. That’s why we recommend focussing projects on end-to-end solutions for a thin, horizontal operational slice. This approach proves the technology and the business case with minimal risk, delivers measurable value incrementally, builds confidence with each slice, and allows a rapid change of direction if necessary. If your service provider or vendor can’t demonstrate value within six weeks of a project starting, you may want to reconsider working with them.



BLAM vs BOLT: what’s in a name?

Since its launch in 2020, Blue Lucy’s flagship product, BLAM, has also been the company’s only product. BLAM is a sophisticated workflow orchestration, system integration and media management platform, and both its core capability and the microservices which comprise its orchestration functionality have constantly evolved in that time. It has, however, remained Blue Lucy’s sole solution.  Until now.  At IBC 2024, Blue Lucy launched BOLT, a new product that’s described as a global gateway to content libraries for non-technical users. We asked Blue Lucy founder, Julian Wright, what prompted the decision to develop a new product and what distinguishes BOLT from BLAM.

Q: Can you give us an elevator pitch overview of Blue Lucy, BLAM and BOLT?

A: Blue Lucy’s ethos is to take an orthodoxy-challenging approach to solving media business problems. Our core product, BLAM, is a sophisticated integration and orchestration platform designed to meet complex and evolving business needs in production, localisation, and distribution. BLAM is an “enterprise” platform that serves the operational needs of multiple aspects of a media business. Our new product, BOLT, is an operationally simple offering designed to meet a fundamental, and critically important, business need for any operation handling media assets: accessibility.

Q: What was the initial motivation for developing BOLT?

A: In BLAM deployments our implementation engineers and analysts tend to work with the media operations team within the ‘engine room’ of the operation. In conversation with senior management teams, we were often surprised by statements along the lines of: “Your platform is great, the automation is really driving time to market and cost efficiency, but at the executive level we would just like a really simple way to view our current inventory.”

Q: So, is BOLT simply a media portal for viewing content?

A: That’s how it started, but it’s developed into more than that. On further examination, we uncovered a number of apparently simple requirements within the commercial business (i.e., outside of the technical content supply operation) that could be addressed by a toolset similar to BLAM. Alongside the basic requirement to search and discover content is the ability to create showcases and viewing rooms and distribute these to sales prospects or internal marketing teams via secure links. Some customers want to create one or more branded ‘storefront’ portals to directly support sales or similar customer self-service functions. On the subject of portals, we saw a clear and common requirement among distributors to provide easy-to-use media upload portals to their production partners, allowing them to push finished content and production metadata to the central content management and processing function. This is particularly important to aggregator distributors or production companies that hold a number of separate brands or labels under a broader umbrella. BOLT satisfies all of these needs: it gives you effortless access to your content, provides an intuitive upload function and allows content owners to showcase and monetise content catalogues.

Q: Many of these capabilities are available in BLAM – what makes BOLT different?

A: All of these functions are supported by BLAM as standard, but taken together they clearly warranted a commercially focused product in their own right. This is particularly true of the overarching requirement that the tools be easy to use. We took this one step further and defined a vision for the new product: it should have a zero-training requirement and be as intuitive as an online banking app – well, the good ones at least.

Q: Are BLAM and BOLT totally independent products?

A: BOLT is built on the BLAM core technology, and existing BLAM users may add the BOLT capabilities to extend the operational reach of their BLAM platform to support commercial and external business activities. In this context, if BLAM is the engine room, BOLT is the viewing platform. But BOLT is, of course, also available standalone as a separate product.

Your content inventory represents the most significant proportion of the value of your media business. To maximise that value, make it accessible with BOLT from Blue Lucy.

