Blue Lucy and Censhare, part of Entirely, to debut integrated DAM and video orchestration workflows at NAB

LONDON, England and Denver, CO – April 14th, 2026. Media orchestration specialists Blue Lucy have announced a new integration with marketing technology experts Censhare, debuting at NAB, that extends content orchestration, control and visibility across enterprise DAM and video operations.

The integration connects previously siloed systems within a governed workflow environment, enabling organisations to manage content movement, editing and distribution more efficiently across the media supply chain.

By combining Censhare’s enterprise DAM capabilities with Blue Lucy’s orchestration platform, the integration supports a more connected approach to video workflows for environments where automation, traceability and operational oversight are critical.

Blue Lucy founder Julian Wright said:
“The challenge for modern media operations is less about access to content than it is about control. This integration strengthens our ability to help organisations manage content across DAM and video operations, reducing friction, increasing visibility, consistency and, crucially, governance at every stage of the lifecycle.”

Chris Leaman, SVP Customer Experience at Censhare, added:
“This goes beyond simply sharing assets with a static MAM to support video workflows. By integrating with Blue Lucy’s workflow orchestration platform, we enable a true ‘round trip’ capability for video – from material ingest, through edit, approval and on to the demand-based delivery chains. It’s about providing content operations teams with a joined-up and efficient way to produce and deliver content on an enterprise basis.”

The integration will be showcased at NAB, where attendees can see how Blue Lucy and Censhare are helping organisations streamline complex media workflows to reduce friction and drive operational efficiency.

Future enhancements will see full project synchronisation between the systems, enabling seamless access to content across the Censhare and Blue Lucy platforms.

In addition to its fully featured MAM capabilities, the Blue Lucy platform integrates with multiple editing tools and provides a high degree of automation for organisations managing complex, enterprise-scale media operations.

Attendees at NAB 2026 can see live demonstrations of Blue Lucy’s platform on Booth W2318, and of the Censhare DAM solution on Booth W1353.

 


Our Top 3 Predictions for NAB This Year

NAB always brings plenty of noise, big ideas, and confident predictions about the future of media tech. With the Sphere looming over Vegas like a modern-day crystal ball, it feels like the right backdrop for a few predictions of our own. So here are Blue Lucy’s Top 3 for NAB 2026.

Agree? Disagree? Let’s see where things land after a few days in Vegas.

 

  1. AI – the excitement is still there, but governance is taking centre stage

AI will dominate, as expected. But we think the tone has shifted. This year, the real conversation won’t be about what AI can do, it’ll be about how it’s controlled in real-world environments. Because once AI moves from pilot to production, governance stops being optional. It becomes the difference between scalable value and unmanaged risk.

Our bet: the most interesting conversations won’t be about models, but about boundaries.

 

  2. Hybrid isn’t a strategy anymore – it’s just how things are done

We’re past the point of debating cloud vs on-prem. Most organisations are already operating across both – in fact the recent DPP European Media Trends report indicates that nearly 80% of media organisations are operating a hybrid infrastructure. So if hybrid has quietly become the default reality, which systems genuinely support that, or are businesses still stitching environments together and calling it flexibility?

Our view: the platforms that win here will be the ones that were built to orchestrate workflows in hybrid deployments from day one, not adapted to it later.

 

  3. Value is under pressure – and patience is running out

Across both media organisations and large corporates, the message is consistent: prove value, and prove it quickly. Long roadmaps and abstract transformation stories are losing ground to immediate, measurable impact. It’s no longer enough for technology to be powerful. It has to be useful fast. We think this will quietly be one of the most important themes at the show, even if it doesn’t always make the biggest headlines.

Our take: reducing costs is the hottest topic, which means value will shape every conversation.

 

Let’s check back after NAB

These are our predictions going in – but NAB has a habit of proving (or disproving) them pretty quickly. Let’s revisit once the dust settles and see what actually dominated the conversation. If you’re there, come and find us at Booth W2318 – or let us know what you’re seeing.


Built First. Built Better. Still Ahead.

They say imitation is the sincerest form of flattery, and at Blue Lucy we’re pleased to see our vision helping shape the direction of the industry.

While others may be working to close the gap, our orchestration engine already features a library of over 600 individually packaged microservices, ready to build into automated workflows that meet the complex and evolving needs of leading media operators, broadcasters, and brands.

This isn’t new territory for us. Blue Lucy has been delivering measurable business value in the field since 2020. In 2025 alone, more than 133 million microservice tasks were executed across our user base, representing a 30% increase year over year.

Recent pre-NAB announcements suggest a broader recognition across the market: highly complex platforms that depend on dedicated DevOps support can slow down, rather than accelerate, the delivery of business value.

From the outset, Blue Lucy has taken a different approach, combining powerful configurability with genuine ease of use. Our intuitive low-code/no-code workflow builder, paired with a responsive and accessible user experience, enables teams to unlock operational value without the burden of unnecessary complexity.

For the real deal – and a real demo – visit Booth W2318 at NAB.


Blue Lucy Brings Order to the AI Wild West at NAB 2026

Orchestration platform enables broadcasters to deploy multiple AI models safely with full auditability, rights protection, and regulatory oversight.

LONDON, England – March 12th, 2026 – Blue Lucy will showcase its orchestration platform at NAB 2026 (Booth W2318), demonstrating how broadcasters and media companies can integrate multiple AI services into their production and content supply chain workflows while maintaining full governance, transparency, and control.

The platform acts as a central orchestration layer for AI-powered media workflows, enabling organisations to select the most appropriate models for tasks such as metadata generation, localisation, compliance checks, and content analysis, while ensuring every interaction is tracked, auditable, and aligned with organisational policy.

As AI adoption accelerates across production, marketing teams, and creative workflows, many teams are operating in the shadows, experimenting with fragmented tools and limited oversight, in what some industry commentators describe as the “AI Wild West”. Risks include models trained on unlicensed datasets that could create legal liability, automated tagging or editorial recommendations that lack transparency, and biases in AI outputs that affect metadata, moderation, or editorial decisions. The challenges are becoming more urgent as new regulatory frameworks are set to impose obligations for high-risk systems, placing new expectations on organisations to demonstrate transparency, accountability, and responsible AI use.

Blue Lucy’s orchestration platform directly addresses these challenges by enabling organisations to adopt AI services safely and at scale. Instead of locking customers into a single vendor ecosystem, the platform allows media operators to integrate multiple AI providers while enforcing governance rules around data residency, model usage, and content rights.

“Some media technology commentators describe the current landscape as the AI Wild West. In that context the winners will be those with sufficient sheriffs, not the fastest guns,” said Julian Wright, Founder of Blue Lucy. “Our orchestration platform integrates with multiple AI services so media operators can access the most appropriate AI models for each workflow, while maintaining full control and auditability from input to output. That means our clients can take their AI use to enterprise scale safely while protecting their brands, their content rights, and their compliance obligations.”

Example workflow in action
A broadcaster preparing a large archive of sports highlights for international distribution could automatically trigger a multi-step AI workflow orchestrated through Blue Lucy’s platform. One AI model generates metadata and identifies key moments in the footage, another produces multilingual captions and translations, while a third performs compliance checks against rights and usage restrictions. Blue Lucy’s orchestration layer manages the process end-to-end – routing tasks to the most appropriate AI services, applying human approval gates where required, and capturing a full audit trail of every input, decision, and output.
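The workflow described above can be sketched in a few lines of orchestration logic. This is a minimal, illustrative sketch only – the function and field names are invented for this example and are not Blue Lucy's actual API; the three stub functions stand in for calls to external AI services.

```python
# Illustrative sketch of the sports-highlights workflow described above.
# All names are hypothetical; the stubs stand in for external AI models.

def generate_metadata(asset_id):
    # Stand-in for an AI metadata / key-moment detection model
    return {"key_moments": [3.5, 47.0], "tags": ["goal", "celebration"]}

def generate_captions(asset_id):
    # Stand-in for a multilingual captioning and translation model
    return {"languages": ["en", "es", "de"]}

def check_rights(asset_id):
    # Stand-in for a rights / compliance checking model
    return {"approved": False, "reason": "territory restriction"}

def process_highlight(asset_id, audit):
    """Route tasks to the three models, logging every step as it runs."""
    results = {}
    for name, step in [("metadata", generate_metadata),
                       ("captions", generate_captions),
                       ("compliance", check_rights)]:
        output = step(asset_id)
        audit.append({"asset": asset_id, "step": name, "output": output})
        results[name] = output
    if not results["compliance"]["approved"]:
        # Human approval gate: flagged assets wait for manual sign-off
        audit.append({"asset": asset_id, "step": "human_review",
                      "status": "pending"})
    return results

audit_trail = []
summary = process_highlight("match-2026-001", audit_trail)
```

The point of the sketch is the shape, not the detail: every model call appends to the same audit trail, and the compliance result decides whether a human gate is inserted before delivery.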

Unlike single-vendor AI solutions, Blue Lucy lets broadcasters combine multiple AI models while enforcing governance policies, so teams can choose the right tool for each task without sacrificing control or compliance.

Key features of Blue Lucy’s platform include:

  • AI-provider agnostic orchestration – Integrates a wide range of AI services, enabling best-fit model selection while meeting governance, rights, and data residency requirements.
  • Agentic orchestration with human oversight – Supports multi-step automated workflows with tool chaining, decision logic, and human approval gates.
  • Full transparency and auditability – Captures all AI interactions, inputs, outputs, and human moderation steps to create a complete, compliant audit trail.
  • Secure AI operations for media workflows – Provides visibility into where data is processed, who can access it, and how AI outputs are used.

Attendees at NAB can see live demonstrations of Blue Lucy’s platform on Booth W2318, showcasing how broadcasters and media companies can accelerate AI adoption without scaling risk.

Make an appointment to see Blue Lucy at NAB here


Blue Lucy Technology

A deep dive into the platform.

Architecture

The Blue Lucy platform follows a distributed microservices architecture, meaning the overall operational capability is structured as a collection of loosely coupled services. This architecture is robust, resilient and conforms to the separation of concerns paradigm. The singularity of purpose which is a key tenet of separation means the platform is easy to maintain and extend. Equally there are significant opportunities for re-use of components which speeds up our development, so that as new business requirements come in – such as a connector to a new third-party system – we can implement the capability with unparalleled speed.

Overview 

The overall architecture comprises the database and two core Blue Lucy components: the Application Programming Interface (API) and the Workflow Runner (WFR).

The database is the single data repository of the system, with the API and WFR being stateless services, which allows automated horizontal scaling. Blue Lucy hosted services are typically deployed in a Highly Available (HA) configuration comprising two API engines and two WFRs arranged behind a load balancer.

The API and WFR 

The API and WFR are containerised services and will run in any compute infrastructure, which means that the platform is not only agnostic to the cloud infrastructure environment in which it runs but may also be run ‘on prem’ in any suitable container orchestrator such as Kubernetes. This affords maximum operational flexibility, including running cloud-ground hybrid systems – an important capability in an industry in which most systems and media are currently located at the facility. Around 80% of our platform deployments are cloud-ground hybrid. The distributed architecture also supports worldwide operating models for globally distributed business operations.

Microservices

Within the platform orchestration layer there is a further abstraction between the WFR and the microservices which perform the operationally specific functions at run time. The microservices are individual executable components in their own right and separate from the WFR itself. This enables the microservices to be developed independently of the WFR and provides an extra level of safety at run time. This true microservice architecture gives the Blue Lucy business unparalleled development scale and means that microservices may be developed by third parties. Microservices may be updated, or new ones applied to live systems, without any downtime or interruption; the new services are simply picked up by the WFR when called.

Our microservices run in the WFR and interact with the platform API and the API of the third-party services directly, enabling real time updates between the platform orchestrator and subsystems. There are currently more than 600 microservices available off the shelf of which approximately 250 are integration connectors to media and business systems.
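The "picked up when called" pattern can be illustrated with a minimal registry sketch. This is not Blue Lucy's implementation – the class shape, service name and field names are invented for illustration – but it shows the essential idea: each microservice is a self-contained unit with declared inputs, resolved by name at call time rather than compiled into the runner.

```python
# Hypothetical shape of an individually packaged microservice: a
# self-contained unit with declared inputs that a workflow runner can
# discover and invoke by name. All names are illustrative only.

class CopyAssetService:
    name = "copy_asset"
    inputs = {"source_path": str, "dest_path": str}

    def execute(self, params):
        # A real connector would call the platform API or a third-party
        # system here; this stub just reports what it would do.
        return {"status": "ok",
                "moved": f"{params['source_path']} -> {params['dest_path']}"}

# The runner resolves services by name at call time, so a newly
# deployed version is simply picked up on the next invocation.
registry = {svc.name: svc for svc in [CopyAssetService()]}
result = registry["copy_asset"].execute(
    {"source_path": "/ingest/a.mxf", "dest_path": "/library/a.mxf"})
```

Because the registry lookup happens per call, swapping the entry behind `"copy_asset"` updates behaviour without restarting the runner, which is what makes hot-plugging possible.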

Alongside the microservices, the platform also has a range of plugins which are similar in construct and equally hot-pluggable, but are designed for the integration of event-driven architectures and are deployed as listeners. Examples in use might be a plugin subscribed to a message queue listening for specific events, or an HTTP listener to extend the platform’s API.

Integrate and extend 

The platform has an open REST API, which is the same API used by the platform’s user interfaces (UI). Programmatic use of the API is supported with embedded documentation generated by Swagger, and further documentation is hosted in the online knowledge portal, Blue Lucy Central, which may be accessed directly within the applications. In addition, a full Software Development Kit (SDK) is available which allows developers to build microservices for the platform.

Available as a package from NuGet, the developer-friendly .NET SDK enables software engineers to code in their preferred IDE, such as Visual Studio, and provides helper features, such as IntelliSense, facilitating rapid and predictable development. The SDK supports rapid learning, with standard methods for accessing data, and provides a safe interface, as commands interact with the Blue Lucy API rather than lower-level services. The SDK allows developers to use any .NET-compatible library, which provides the freedom to integrate any third-party component.

Using the SDK is more powerful than simply calling the API, as it utilises the WFR service to perform any third-party function or interaction. This has the potential to extend the useful functionality of the platform well beyond the usual media and broadcast systems, driving more business value for operators. The public SDK is the same tool that we use internally for development, so it is proven, robust and regularly updated. An SDK is also available for Python, and the Python runtime environment is included with the WFR as standard.
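The safety property described above – SDK methods that call the platform API rather than touching lower-level resources – is a common wrapping pattern, sketched here in Python. This is a generic illustration, not Blue Lucy's actual SDK surface: the class, method names and base URL are invented, and the transport is injected as a stub so the sketch runs without a live platform.

```python
# Generic illustration of the SDK-over-API pattern: typed helper methods
# that go through the platform REST API, never the database directly.
# Class, method names and URL are invented for this sketch.
import json

class PlatformClient:
    def __init__(self, base_url, send=None):
        self.base_url = base_url
        # Injected transport stub so the sketch runs with no live server;
        # a real SDK would perform an authenticated HTTP request here.
        self._send = send or (lambda method, url, body=None:
                              {"status": 200, "url": url})

    def get_asset(self, asset_id):
        return self._send("GET", f"{self.base_url}/assets/{asset_id}")

    def update_metadata(self, asset_id, fields):
        # The helper validates and serialises; because every write goes
        # through the API, platform-side rules are always enforced.
        return self._send("PUT",
                          f"{self.base_url}/assets/{asset_id}/metadata",
                          body=json.dumps(fields))

client = PlatformClient("https://platform.example/api/v1")
resp = client.update_metadata("abc-123", {"title": "Final cut"})
```

The design benefit is that the SDK cannot bypass the API's validation and access control, which is what makes the interface "safe" for third-party developers.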

Frameworks 

The platform is underpinned by .NET 10, the latest and fastest .NET framework, which affords a truly cross-platform, open-source, common language runtime environment. .NET 10 offers long-term supportability and an excellent security model, and is optimised for containerised deployments, providing true enterprise-level robustness.

For the tech ops and administrators ‘factory’ interface we use the Angular 20 framework from Google and for the production operations ‘hub’ view we use React developed and maintained by Facebook. Both frameworks provide an optimal user experience within the operational use cases. SignalR is used extensively in the user interface to provide real-time status updates as it eliminates the need for polling from the front-end to reduce chatter and provides users with instantaneous operational status updates.

For deployed platform observability we conform to industry standards, including supporting OpenTelemetry, to enable centralised logging, metrics and full tracing. You can even inject your own trace ID to integrate fully with upstream triggers, or push the trace ID to downstream recipients, giving powerful and versatile end-to-end operational visibility.
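Trace-ID injection of this kind typically follows the W3C Trace Context convention that OpenTelemetry uses on the wire: a `traceparent` header of the form `version-traceid-spanid-flags`. A minimal stdlib-only sketch of building and parsing that header (how it is wired into any particular deployment is outside this sketch):

```python
# Minimal sketch of W3C Trace Context propagation, the convention
# OpenTelemetry uses for carrying a trace ID between systems.
import secrets

def make_traceparent(trace_id=None):
    """Build a traceparent header: version-traceid-spanid-flags."""
    trace_id = trace_id or secrets.token_hex(16)  # 32 hex chars
    span_id = secrets.token_hex(8)                # 16 hex chars
    return f"00-{trace_id}-{span_id}-01"          # 01 = sampled flag

def extract_trace_id(traceparent):
    """Recover the trace ID so downstream spans join the same trace."""
    version, trace_id, span_id, flags = traceparent.split("-")
    return trace_id

# An upstream trigger supplies its own trace ID; spans created with
# this header are stitched into the same end-to-end trace.
upstream = make_traceparent("4bf92f3577b34da6a3ce929d0e0e4736")
```

Passing the same `traceparent` to downstream recipients is what lets a single trace span the upstream trigger, the orchestration layer and every subsystem it calls.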

Platform updates 

In development, Blue Lucy follows continuous integration principles, which drives a modular development approach and ensures robust and reliable operation.

Equally, software updates – which are deployed automatically for Blue Lucy managed systems – always follow the continuous delivery paradigm, with updates to the platform core services also made available to customer-managed environments.

We are currently making major releases, typically pegged to specific new features or platform capabilities, between six and nine times a year. As the microservices are abstracted from the core, these may be released to in-production systems as required. In all cases updates may be deployed with zero, or near-zero, downtime.

Blue Lucy prefers to focus on the operational business value of our products rather than technology, but the overall architecture and our approach to software development is an important aspect of our value proposition. If you would like more information about the Blue Lucy software architecture, we are always happy to talk.


Blue Lucy’s 6 Key Tenets

Modern media operations demand a platform that unites automation, orchestration, and human oversight without compromise. In this post, we explore the six key tenets that underpin Blue Lucy’s technically robust, extensible, and operationally efficient platform built for high-volume, complex workflows across hybrid environments.

1. Combine core capabilities

‘Static’ MAMs without workflow orchestration do not provide the level of automation required for today’s high-volume operations. Equally, workflow orchestration platforms that lack a comprehensive MAM component are insufficiently nimble to support the complexity of a modern media operation. Automation platforms that neglect human operations negate the efficiency they deliver. The Blue Lucy platform combines these core capabilities to deliver maximum efficiency, control, and visibility.

2. Microservices not monolith

The Blue Lucy platform follows a distributed microservices architecture, meaning the overall functional capability is structured as a collection of loosely coupled services. The architecture is robust, resilient and conforms to the separation of concerns paradigm. The singularity of purpose is a key tenet and means the platform is easy to maintain and extend. It comprises more than 500 microservices which are orchestrated by the Workflow Runner service, making it the most readily extensible media integration platform.

3. No forking branches

Media is a complex operational business which has seen decades of evolution. There is an inevitable need for bespoke software components in all but the newest of operating models. It is still common for vendors to branch, or fork, the source code to create a version specific to a customer. This approach presents significant risk to operators. The platform has been structured so that Blue Lucy does not manage numerous code branches or build scripts for different customers. We have a single code base with customer specific bespoke microservices.

4. Integration, Integration, Inte…….

The most significant business value and operational gains come from integrated software and services. But operations built on supplier driven ‘ecosystem’ models are closed and are rightly described as vendor lock-in traps. Such approaches tend to work only as long as it commercially suits the members of the vendor cartel. The Blue Lucy platform has been designed to provide the integration layer between systems allowing technology abstraction whilst driving operational cohesion.

5. No code, low code, code

The platform has been designed to be accessible to operators with varying levels of business and technical expertise. It conforms to the ‘no-code’ paradigm and includes a drag-drop-connect-configure workflow builder, which is atomic, intuitive and means that business analysts can rapidly build and maintain complex operational workflows. In addition to the 500+ microservices, the workflow builder allows secure scripting in C# or Python. Lower-level, developer access is provided through the API, which is supported by an open SDK – the same SDK we use to develop the microservices.

6. Run anywhere

‘Cloud native’ sounds very modern, but binding an operational capability to the cloud – particularly to specific cloud providers – creates cloud dependence, the antithesis of the ethos behind the service-orientated model. Equally, media operations simply do not fit into an on-prem (ground) OR cloud model. Blue Lucy core services are containerised and infrastructure agnostic, enabling a controlled migration to the cloud or, more typically, a ‘hybrid’ cloud-ground deployment and operating model.

 

By combining extensible microservices, intuitive workflow orchestration, and hybrid deployment flexibility, Blue Lucy delivers a platform that empowers media operators to run complex workflows with control, visibility, and efficiency – turning technical sophistication into real-world operational impact.


Has Video outgrown your DAM?

Digital Asset Management systems sit at the heart of most marcoms operations. They centralise content, organise it, and make it discoverable. Integrated with the wider MarTech stack, DAMs support governance and drive efficiency. But video has changed the brief.

Video is no longer an occasional campaign asset. It is now the dominant content format across marketing, product, internal communications, and customer engagement. And as volumes grow, so do the operational pressures.

The issue is not whether your DAM can store video.
The issue is whether your teams can discover, reuse, adapt, and govern it efficiently at scale.

Because when discovery slows, reuse drops.
And when reuse drops, costs rise – often without anyone noticing.


Video Is ‘Just Another File’ – Until It Isn’t

At a basic level, a video file is simply another digital asset. A DAM can store it, catalogue it, and apply metadata to it. But video carries characteristics that fundamentally change how it needs to be managed:

  • Format complexity. Video comes in a wide range of encoded formats (CODECs), each with different configurations – frame rates, encoding structures, audio arrangements. These aren’t cosmetic differences; they directly affect compatibility, quality, and approach to distribution.
  • File size and accessibility. Professional video files are large and often not web-browser compatible. That makes previewing, streaming, and collaboration harder within systems designed primarily for static media.
  • Time-based structure. Unlike images, video unfolds over time. Metadata doesn’t just apply to the whole asset – it applies to specific periods within it.
  • Localisation and variants. Subtitles, audio stems, regulatory edits, regional variations – these are related and often interdependent components, not just new versions of the same file.
  • Derivative creation. Social cutdowns, vertical edits, different durations – all need to maintain lineage back to the master asset to avoid duplication and rights infringements.
  • Ongoing editing cycles. Video assets are routinely adapted long after creation or first publication. Their lifecycle is longer, dynamic and continuous.

And perhaps most importantly, creatives and marketers are rarely searching for a file.
They are searching for a moment – a product shot, a quote, a scene, a reaction. That distinction is where traditional DAM models begin to strain.

Finding the Right Moment – Not Just the Right Asset

Metadata has always powered discovery inside DAM systems. This object-based metadata – campaign, product, spokesperson, usage rights – works well when assets are static.

But video exists in two dimensions:

  • Catalogue metadata – information about the asset as a whole.
  • Temporal metadata – information tied to specific time periods within the asset.

A tag might say “Product X is in this asset,” but it won’t say whether that appears in the first five seconds or the last thirty. It won’t tell you if the segment you want to use is already in use elsewhere, or whether the rights have expired. That lack of clarity increases risk and kills efficiency.
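The two dimensions can be pictured as a simple data model: catalogue fields on the whole asset, plus time-bounded tags that make moment-level queries possible. This is an illustrative schema sketch, not any particular product's data model.

```python
# Sketch of catalogue vs temporal metadata (illustrative schema only).
from dataclasses import dataclass, field

@dataclass
class Segment:
    start: float   # seconds from the start of the asset
    end: float
    tags: list     # what appears within this time period

@dataclass
class VideoAsset:
    asset_id: str
    catalogue: dict                                # whole-asset metadata
    segments: list = field(default_factory=list)   # temporal metadata

    def find(self, tag, before=None):
        """Moment-level query: segments carrying a tag, optionally
        restricted to those starting before a given time."""
        return [s for s in self.segments
                if tag in s.tags and (before is None or s.start < before)]

asset = VideoAsset("v1", {"campaign": "Spring", "rights": "worldwide"})
asset.segments += [Segment(0.0, 5.0, ["Product X"]),
                   Segment(40.0, 70.0, ["Product X", "spokesperson"])]

# "Does Product X appear in the first five seconds?" – answerable only
# with temporal metadata, never with a whole-asset tag alone.
early = asset.find("Product X", before=5.0)
```

A whole-asset tag collapses both segments into one flag; the temporal model is what lets discovery return the moment rather than the file.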

At a small operational scale, teams can compensate with knowledge (memory), spreadsheets, and manual review.

At enterprise scale – across regions, agencies, languages, and campaigns – that approach quickly breaks down.

When discovery doesn’t deliver, teams instinctively create their own workarounds: local edits, shared folders, private versions – bypassing the DAM because it doesn’t give them what they need when they need it. That behaviour isn’t just inefficient, it erodes governance, inflates production costs, and reduces RoI from existing content.

 

The Hidden Cost of “Making It Work”

Most modern DAM platforms support video in some form. Many do so capably within the limits of their original design. But “supporting” video often means adapting workflows around a model designed for static assets.

That adaptation typically looks like:

  • Additional tools bolted on around the DAM
  • Manual reformatting and distribution processes
  • Workarounds for preview and playback
  • Fragmented metadata across systems
  • Disconnected rights tracking

Individually, these compromises feel manageable.
Collectively, they create friction – and friction increases exponentially as content volumes grow.

 

Managing Video Requires a Shift in Perspective

The real question isn’t: “Can our DAM store video?” It’s: “Are we managing video on its own terms?”

What’s emerging is not a rejection of DAM, but a more nuanced ecosystem:

  • DAM remains essential for governance, brand control, and enterprise-wide visibility.
  • Video-native systems handle time-based metadata, format complexity, version control, and high-volume processing.
  • Integration ensures both operate cohesively rather than competitively.

Savvy teams are not looking for a monolithic “silver bullet.” They are rethinking their architecture so that each specialised system – DAM, video indexer, transcoder, rights engine – contributes a distinct capability. The task then becomes enabling systems to collaborate, not forcing one to do everything. That mindset separates high-performing teams from those stuck patching processes.

 

Lessons from Media & Entertainment

These challenges are not new. The Media & Entertainment sector has been solving them since the late 1990s through Media Asset Management (MAM) systems. For broadcasters, manual processes were never viable.

Operating efficiently required:

  • Structured, time-aware metadata
  • High levels of automation
  • Tight integration between production and business systems
  • Clear orchestration across ingest, edit, versioning, and distribution

As corporate video demand begins to reach broadcast volumes, marcoms teams are encountering similar pressures, often without the infrastructure on which professional media organisations rely.

 

Automation Is No Longer Optional

With marcoms content demand chains growing rapidly, manual operations are becoming infeasible – even for mid-sized teams.

Video management at scale requires orchestration:

  • Automated transcoding into multiple formats
  • Structured version control
  • Omni-channel distribution
  • Integrated rights and compliance management

Adjacent to automation is the need to reduce friction between systems. Modern media systems such as the Blue Lucy platform are built with structured integration frameworks designed to connect production tools, DAMs, and business systems efficiently.

Because the real risk isn’t just storage capacity.
It’s operational complexity, and the erosion of value from content you’ve already invested in creating.

 

The Real Challenge

Video is now the dominant marketing medium. Storing it is easy. Managing it intelligently – discovering moments, reusing content, coordinating derivatives, and maintaining governance at scale – is the real challenge.

Organisations that recognise this shift early are building integrated, automated video operations designed for growth.
Those that continue adapting static systems to dynamic media will find the friction, and the cost, only increases over time.

 


The AI Wild West, and Why it Needs a Sheriff

AI Is Scaling Faster Than Governance – And That’s a Risk

AI adoption hasn’t rolled out through neat transformation programmes. It has spread organically, driven by teams trying to move faster. It’s already embedded across newsrooms, marketing departments, communications teams, HR, legal and strategy functions. Often informally, and often without central oversight.

A producer indexes archive footage using an AI tool. A marketing team analyses sentiment. An editor runs a clip through a model to check for profanity.

Each action feels efficient, helpful, low risk. But collectively, they create something most organisations aren’t prepared for: AI embedded in core workflows without visibility, control or traceability.

Where did the data go? Which model was used? Was the output reviewed? Were any rights unintentionally waived in the process?

In many cases, no one has a complete picture. AI hasn’t outpaced governance because organisations are careless. It has outpaced governance because the tools are frictionless – and governance isn’t.

Reputational Risk Now Moves at Machine Speed

The reputational equation has fundamentally changed.

One hallucinated output. One biased summary. One automated decision that shouldn’t have been automated.

And it can be published, shared and amplified instantly.

For media organisations in particular, this is high stakes. Publishing misinformation is damaging enough. Publishing it at machine speed, with unclear accountability, compounds the impact. When something goes wrong, the questions are immediate:

Was AI involved? Was it checked? Who approved it?

If those answers aren’t clear and defensible, credibility takes the hit. AI doesn’t just scale productivity. It scales exposure.

Regulation Is Accelerating – and Accountability Is Personal

At the same time, regulation is catching up quickly. New frameworks demand transparency, oversight and traceability in AI-assisted decisions and content production. Executives are accountable, even when outputs are generated by third-party models. Yet many organisations cannot currently evidence which model produced a specific output, what data informed it, what safeguards were applied, or how the output was reviewed before release.

Policies may exist. Ethical principles are often well articulated. But unless they are embedded in operational systems, they don’t provide protection. The gap between intent and implementation is where risk lives.

Speed Versus Safety Is the Wrong Debate

There’s a perception that governance slows innovation. In reality, the absence of governance creates far greater friction later: retractions, investigations, legal exposure and long reputational repair cycles.

If AI was adopted to improve efficiency, reconstructing an audit trail across multiple disconnected tools defeats the purpose. Manually piecing together who used what, where and how is both time-consuming and unreliable.

The smarter approach is to embed governance directly into the workflow – so it happens automatically, not retrospectively. That’s where managed orchestration becomes critical.

Orchestration: Bringing Control to AI at Scale

What organisations need isn’t just access to AI models. They need control over how those models are selected, used and reviewed.

At Blue Lucy, we’ve focused on building that management layer.

Our orchestration engine has direct integration connectors to multiple AI service providers and platforms, allowing millions of models to be accessed and controlled within a single platform. This allows organisations to choose the most appropriate model for each use case – whether that’s transcription, summarisation, compliance checking or content enhancement – while maintaining absolute control over access and usage.

Traceability is built in.

If AI generates part of a clip, that segment can be flagged for enhanced editorial scrutiny. The prompt can be stored. The model used is recorded. The approval process is logged. An electronic and accessible audit trail exists by default, not as an afterthought.
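The "audit trail by default" idea described above amounts to capturing a structured record for every AI interaction as it happens. A sketch of what such a record might hold – field names are this sketch's own invention, not a specific product schema:

```python
# Illustrative audit record for an AI-assisted step: model, prompt,
# output and approval are captured as a matter of course. Field names
# are invented for this sketch.
from datetime import datetime, timezone

def record_ai_step(model, prompt, output, approved_by=None):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,            # which model produced the output
        "prompt": prompt,          # stored for later editorial scrutiny
        "output": output,
        "approved_by": approved_by,  # human approval gate, if applied
        "ai_generated": True,      # flags the segment for enhanced review
    }

entry = record_ai_step("summariser-v2", "Summarise clip 42",
                       "A two-line summary.", approved_by="editor01")
```

Because the record is written at the moment of use rather than reconstructed afterwards, the questions "which model, which prompt, who approved it" are answerable by lookup, not investigation.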

This isn’t about embedding a limited number of models and hoping they cover every requirement. It’s about enabling organisations to use the best-fit models for their business in a way that is governed, auditable and aligned with their risk profile.

This approach enables your operation to move AI from experimentation to enterprise-grade implementation.

Trust Is the Competitive Advantage

For media brands, trust is the product. Audiences, clients and regulators are increasingly asking the same questions: Was AI involved? Was it checked? Who is responsible?

Being able to answer clearly and confidently isn’t just a compliance exercise. It’s a commercial advantage.

The organisations that will win in this next phase of AI adoption won’t be the ones who moved fastest. They’ll be the ones who scaled responsibly.

Control your inputs. Audit your outputs. Integrate AI intelligently. Embed governance.

Because while AI accelerates value, without the right management layer it drives risk just as quickly.

Some commentators describe the current landscape as ‘the AI Wild West’ – in that context the winners will be those with sufficient sheriffs, not the fastest guns.


DAM Los Angeles

Video moves fast – can your DAM keep up?

Join Blue Lucy in Los Angeles for the West Coast’s leading Digital Asset Management event as we explore, celebrate, and accelerate The Intelligent Evolution of DAM.

Video isn’t just one more content format for your marketing function: it’s the dominant force driving engagement, brand awareness and sales.

Meet us at DAM LA to explore how our platform enables teams across the business to discover, manage, review, and publish video content in a way that’s straightforward, reliable, and ready to go.

 


NAB

The AI Wild West comes to NAB 2026 – and Blue Lucy is bringing a Sheriff.

The AI Wild West is here, and media organisations are feeling the heat. On Booth W2318 we’ll be demonstrating how our orchestration platform enables you to take control of your content and production operation. We give you the tools to tame the chaos, by integrating the most appropriate AI models for each task, maintaining full auditability, and protecting brand trust.

We make AI usage visible, controllable and accountable, and we’ll be showcasing how broadcasters, media companies and brands can accelerate AI adoption without increasing risk. Make your appointment now.

 

