This video is the second part of the CI/CD demo on OpenShift. In this video we will go through how a code change propagates through the delivery pipeline and how we can prevent bad code from reaching higher environments through automated unit tests and code analysis. Furthermore, we will look at the developer workflow in Eclipse and JBoss Developer Studio and how to interact with OpenShift from the developer workstation.
Check out part I to learn how to set up the CI/CD infrastructure on OpenShift: https://www.brighttalk.com/webcast/14777/232569
In this video we will explore how to set up a CI/CD infrastructure on OpenShift by provisioning Jenkins as the CI engine, Gogs as the Git server, Sonatype Nexus as the repository manager, and SonarQube for static code analysis, all running in containers.
Furthermore, we will create a delivery pipeline using the new DSL-based Jenkins Pipeline plugin to build, test, and deploy a sample application and promote it to higher environments.
This first part of the demo will focus on the environment setup.
Part II: https://www.brighttalk.com/webcast/14777/234807
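The provisioning described above can be sketched with `oc` commands along the following lines. This is a minimal sketch, not the exact demo script: the cluster URL is a placeholder, and the Gogs, Nexus, and SonarQube images are pulled straight from their public repositories rather than from whatever templates the demo uses.

```shell
# Log in and create a project to hold the CI/CD tooling
oc login https://openshift.example.com    # placeholder cluster URL
oc new-project cicd --display-name="CI/CD Infrastructure"

# Jenkins ships as a standard OpenShift template
oc new-app jenkins-ephemeral

# Gogs, Nexus and SonarQube are provisioned from public container images
# (image names are illustrative; pin explicit versions in a real setup)
oc new-app gogs/gogs --name=gogs
oc new-app sonatype/nexus3 --name=nexus
oc new-app sonarqube --name=sonarqube

# Expose the services so they are reachable from a browser
oc expose svc/gogs
oc expose svc/nexus
oc expose svc/sonarqube
```

Running the tools as containers in a dedicated project keeps the whole CI/CD stack reproducible: tearing down and recreating the environment is a matter of deleting and re-running the project setup.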
Anyone practicing continuous delivery must rethink testing and test data management.
Many companies already develop in an agile way today. In continuous integration, the code is recompiled daily and verified with unit tests. But that is where it ends. Testing and release still involve a great deal of manual effort. Despite agile development, releases happen only twice a year.
If we want to take continuous delivery a step further, we must automate the testing not only of components but of entire systems. For this we need test environments. And test data. Tons of test data. Manual test data management is not enough here.
When we analyze test data requirements, we find that we are testing the wrong things. Some data constellations are never tested at all, while others are tested twice or many times over. Because when we change the code, we do not change the test cases; we simply add new ones.
We must radically rethink not only test data management but also the way we test.
In this webinar, IPT and CA show how modern test case design and test data management can positively influence quality and speed in software development.
Only then can we move a step closer to continuous delivery.
The speed of business is accelerating. In order to keep up, organizations need to adapt and transform in order to deliver software innovation in a continuous manner. You need a new approach and way of designing, deploying and scaling your applications from development through production and back again, because the innovation cycle never ends.
How can you steer your IT organization toward this new paradigm and deliver the differentiating digital experiences your enterprise needs, quickly and efficiently? In this webinar, you'll learn how the Financial Times is transforming its business with Heroku Flow, a powerful new continuous delivery (CD) toolchain that delivers on the promise of transformation by making CD visual, easy to manage, and accessible to all team members, from design and engineering to product management, QA, and operations. You'll also discover how your organization can do the same.
Fast and accurate quality verification is important in every release of software—whether your organization releases software several times a day or once a year—and automated testing is the best way to do it. But if automated tests can’t be trusted, they'll prevent the adoption of DevOps and continuous delivery. So what’s the best way to test software releases?
Watch this webcast to learn how Red Hat® JBoss® Middleware can help you:
- Write fast and reliable automated tests for Java™ applications
- Use automated functional testing frameworks
- Automate regression testing to identify performance problems
1) Accelerated, error-free builds
Agile development creates more frequent builds. A CD platform that optimizes and parallelizes large builds across hundreds, or even thousands, of cores is good. A solution that automatically detects dependencies to eliminate broken builds is even better.
2) Faster feedback
You shouldn’t have to wait until builds or tests are 100% complete before receiving feedback. A CD platform that allows real-time drill-down into warnings and errors as they occur helps eliminate wasted time and CPU cycles.
3) Bulletproof and painless processes
CD isn’t continuous if in-flight jobs are lost when a CI server goes down, if it takes days for QA machines to be provisioned, or if deployments fail because of differences between QA and production. A CD platform should automate and normalize the build, test, and deploy process across ANY environment (public, private, or hybrid) with 1-click simplicity.
Foreign banks are increasingly looking to diversify their financing options. With careful planning, they can access US investors without subjecting themselves to the securities registration requirements applicable to public offerings, or the ongoing disclosure and governance requirements applicable to US reporting companies.
Speakers on this webinar will explain how non-US banks can pursue these funding avenues.
- Issuances exempt from registration under Rule 144A;
- Issuances that rely on registration exceptions provided by Securities Act Section 3(a)(2) for securities offered or guaranteed by banks;
- Setting up a Rule 144A or bank note programme for straight debt;
- Issuing covered bonds in reliance on Rule 144A or Section 3(a)(2);
- Yankee CD programmes; and
- Banking and securities regulatory requirements to consider before setting up an issuance programme.
- Brad Berman, Morrison & Foerster
- Jerry Marlatt, Morrison & Foerster
- Jack McSpadden, Citigroup
- Laura Drumm, Citigroup
- Danielle Myles, IFLR (moderator)
California and New York CLE credit will be offered for this webinar
Interaction between antigen-specific T cells and antigen-presenting cells (APCs) bearing cognate ligand involves reorganization of the cytoskeleton and recruitment of adhesive and signaling molecules to the site of intercellular contact. Sustained adhesion of T cells to APCs and formation of the immunological synapse after T cell receptor stimulation are required for the antigen-specific response. One way to measure an immunological synapse is by fluorescently labeling the molecules that have been recruited to the synapse and imaging by fluorescence microscopy. However, immunological synapses are rare and therefore difficult to analyze objectively and statistically by traditional microscopy methods. To overcome these problems, we employed the Amnis-brand imaging flow cytometers to objectively collect imagery of large numbers of cells. We report the percentage of T cells involved in an organized immunological synapse, the recruitment of the adhesion molecule LFA-1 and the signaling molecule Lck to the synaptic complex, and the subsequent translocation of NFkB from the cytoplasm to the nucleus in the T cell. In this study, Raji B cells loaded with Staphylococcal enterotoxin B (SEB) were incubated with human T cells to create T cell-APC conjugates. Cells were stained in various combinations for CD3, CD19, Actin, LFA-1, Lck and NFkB. Results from the FlowSight and the ImageStream imaging flow cytometers are compared. Using the FlowSight imaging flow cytometer we demonstrate image-based parameters that were used to assess the frequency of conjugates with an organized immunological synapse in an objective and statistically significant manner. Employing the ImageStream imaging flow cytometer we further evaluate the specific location of the adhesion and signaling molecules LFA-1 and Lck within the immunological synapse complex in T cells and measure the nuclear localization of NFkB in the T cell.
Containers are key to adopting DevOps and continuous integration / continuous deployment (CI/CD) across organizations. Docker provides a simple API to build and run containers; however, running a large number of containers across many hosts needs more than that. Kubernetes orchestration capabilities bring the operational support needed to run containers at scale on any type of infrastructure. OpenShift builds on top of Kubernetes to add a developer-focused experience for building, distributing, and running containers, and to put the dev back into DevOps.
Join this session to learn how to take advantage of Docker and OpenShift to automate application delivery through its entire life cycle—from source code all the way to an orchestrated application running in multiple containers.
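As a concrete illustration of the two layers described above (a sketch only; the image name and Git repository URL are hypothetical), building and running a container with plain Docker versus letting OpenShift drive the same lifecycle from source looks roughly like this:

```shell
# Plain Docker: build an image from a local Dockerfile and run it
docker build -t myorg/myapp:latest .
docker run -d -p 8080:8080 myorg/myapp:latest

# OpenShift: build from source (source-to-image), deploy, and expose
oc new-app https://github.com/myorg/myapp.git --name=myapp
oc logs -f bc/myapp        # follow the in-cluster image build
oc expose svc/myapp        # create a route to the running application
```

The key difference is where the lifecycle lives: with plain Docker the developer machine builds and runs the image, while `oc new-app` hands the whole build-deploy-run flow to the cluster, which is what makes promotion through environments automatable.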
Many enterprises embarking on OpenStack deployments are not aware of potential pitfalls during their implementation. Networking and application services represent one such area that can trip up even the most experienced network and cloud teams.
In this webinar, Nate Baechtold, Enterprise Architect at EBSCO Information Services - a pioneer and successful early adopter of OpenStack - will share his experience and information that he wishes he had before he started his OpenStack journey.
You will learn:
- Best practices and lessons learned from EBSCO’s successful OpenStack deployment
- Load balancing (LBaaS) and self-service considerations within the OpenStack environment and how to meet performance and availability requirements
- Building a CI/CD delivery model with blue-green deployments on top of OpenStack and software-defined load balancers
Learn about Electric Cloud's innovative products. A simple overview of what the products do.
Containers provide an easy way to package applications and deliver them seamlessly from development to test to production. This helps ensure consistency across a variety of environments, including physical servers, VMs, and private or public clouds. With all of these benefits, organizations are rapidly adopting containers to easily develop and manage the applications that add business value. However, as with any newer technology, enterprise use requires strong security at every stage. You need to think about security throughout the layers of the software stack, and you need to secure your continuous integration / continuous deployment (CI/CD) pipeline.
Join this session to learn about:
- The 10 layers of an enterprise-scale container deployment.
- The best ways to build security into each layer.
- How to manage security layers yourself, or deploy a container platform that includes built-in security features.
- How Red Hat OpenShift Container Platform can be used to deliver continuous security for containers.
When upgrading metro networks from 1 to 10 GigE, the distortions induced by chromatic dispersion and polarisation mode dispersion will limit the distance that can be attained. When upgrading core networks from 10 (OTU2, 10 GigE, etc.) to 100 (OTU3, 100 GigE, etc.) and soon to 400 Gbit/s, coherent systems will be able to post-compensate the linear distortions, but to what extent? What are the limits? What are the best fiber-characterization practices?
In this Webinar, we are going to reveal the myths and realities about the critical parameters that need to be controlled when deploying hybrid systems, such as 10 Gbit/s (NRZ) and 100 Gbit/s (DP-QPSK), including:
Foreign banks are increasingly looking to diversify their financing options. With careful planning, they can access US investors without subjecting themselves to the securities registration requirements applicable to public offerings, or the ongoing disclosure and governance requirements applicable to US reporting companies. This webinar will explain how non-US banks can pursue these funding avenues. Topics of discussion will include:
•Issuances exempt from registration under Rule 144A;
•Issuances that rely on registration exceptions provided by Securities Act Section 3(a)(2) for securities offered or guaranteed by banks;
•Setting up a Rule 144A or bank note program for straight debt;
•Issuing contingent capital or other securities convertible into equity upon the occurrence of a non-viability event;
•Yankee CD programs; and
•Banking and securities regulatory requirements to consider before setting up an issuance program.
Anna Pinedo, Partner, Morrison & Foerster
Bradley Berman, Of Counsel, Morrison & Foerster
Tom Young, Managing editor, IFLR
Microservice architecture has been adopted by modern software teams as a way to deliver business value faster, addressing the DevOps goal of maximizing value by reducing cycle time. Container technology enables delivery of microservices into any environment. Docker has accelerated this by providing an easy-to-use toolset for development teams to build, ship, and run distributed applications. These applications can be composed of hundreds of microservices packaged in Docker containers, deployed and running on ANY IaaS (e.g., AWS, GCP, Azure, or on-premises).
In a recent NGINX survey, the “biggest challenge holding back developers” is the trade-off between quality and speed. As Martin Fowler indicates, testing strategies in microservices architecture can be very complex. To address this complexity, we need to test in a real environment with real data – the “continuous delivery” (CD) phase of the lifecycle.
A framework lets developers working in the Docker ecosystem easily test a complex system of microservices, with no change to their existing behavior. Developers gravitate to frameworks because they speed up development and make it easy to share best practices. Applied to testing microservices in containers, a framework is a simple abstraction layer, and abstractions make life simpler for developers building and deploying modern applications.
Join us for this webinar to learn more.
Today's rapid development pace demands continuous performance testing be an integral part of your continuous delivery pipelines. Jenkins, the leading open source automation platform, has emerged as the hub of continuous delivery (CD), and SOASTA and CloudBees, the enterprise Jenkins company, have tapped Jenkins to enable more test types and approaches that utilize cloud and agile process for continuously delivering higher quality web apps and services.
Watch this free webinar from SOASTA and CloudBees to learn how to:
- Integrate realistic automated web performance tests into your continuous delivery pipelines managed by Jenkins
- Architect and launch a test environment that auto-provisions in the cloud
- Access the largest global test cloud for load generation
- Manage a load generation grid to drive load tests in a lights-out mode
- Establish a performance baseline in your daily Jenkins reports
- Execute tests in parallel with CD pipelines built and executed with Jenkins Workflow
As software becomes the competitive currency of the enterprise, investments are being made to embrace new distributed cloud applications, implement cloud platforms, and automate infrastructure provisioning. However, organizations face many challenges in setting the correct path toward cloud native and delivering high-quality, agile applications, all while keeping up with the increasing pace of business without sacrificing governance and operational efficiency.
Cisco, Apprenda and Redapt have joined hands to offer an end-to-end solution to alleviate these challenges and transform your datacenter into a secure, policy-driven application cloud platform.
Join us to learn:
• Where your organization currently fits on the cloud maturity model and what strategies can help you succeed at improving your cloud capabilities
• How enterprises can effortlessly cloud-enable existing monolithic Java and/or .NET applications in addition to running Docker-based cloud native applications
• How to drive rapid development through DevOps, including continuous integration (CI) and continuous delivery/deployment (CD)
• Approaches to leverage both IaaS and PaaS to automate service provisioning, security, compliance and governance
• Ways to drive order of magnitude improvements in infrastructure utilization, developer productivity, organizational agility, and governance
• And more!
In the application economy, where traditional business is being disrupted, velocity, agility, quality, and customer experience are critical for enterprises to drive differentiation and remain competitive. While many believe that DevOps is only for startups and unicorns, forward-thinking organizations are focusing on unlocking the potential of their mainframe investments to drive their business technology culture, leveraging DevOps practices including agile development, automated CI/CD pipelines, testing, and release automation to deliver differentiated products and services.
Join guest speaker Rob Stroud, Principal Analyst at Forrester, and Dana Boudreau, Senior Director of Product Management at CA Technologies, for an in-depth discussion on how bringing DevOps to the mainframe can challenge traditional development methodologies, delivering code faster with greater quality in responding to escalating market demands.
DevOps teams are challenged with monitoring, tracking, and troubleshooting issues in a context where continuous integration servers and DevOps tools each emit their own logging data. Machine data can come from numerous sources, and CD tools may not agree on a common method. Once log data has been acquired, assembling meaningful real-time metrics such as the condition of your host environment, the number of running containers, CPU usage, memory consumption, and network performance can be challenging.
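For the container-level metrics mentioned above, the Docker CLI itself offers a starting point (a minimal sketch; the container name is illustrative, and aggregating these snapshots into real-time dashboards is where dedicated log and metrics tooling comes in):

```shell
# Count the containers currently running on this host
docker ps -q | wc -l

# One-shot snapshot of CPU, memory and network usage per container
docker stats --no-stream \
  --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}"

# Tail the recent logs of a single container (name is illustrative)
docker logs -f --tail 100 my-ci-agent
```

Commands like these are what log-aggregation agents poll under the hood; the hard part is correlating their output across many hosts and with the logs the CI/CD tools themselves emit.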
InSpec is an open-source testing framework with a human-readable language for specifying compliance, security and other policy requirements. Just as Chef treats infrastructure as code, InSpec treats compliance as code. The shift away from having people act directly on machines to having people act on code means that compliance testing becomes automated, repeatable, and versionable.
Traditionally, compliance policies are stored in a spreadsheet, PDF, or Word document. Those policies are then translated into manual processes and tests that often occur only after a product is developed or deployed. With InSpec, you replace abstract policy descriptions with tangible tests that have a clear intent, and can catch any issues early in the development process. You can apply those tests to every environment across your organization to make sure that they all adhere to policy and are consistent with compliance requirements.
InSpec applies DevOps principles to security and risk management. It provides a single collaborative testing framework allowing you to create a code base that is accessible to everyone on your team. Compliance tests can become part of an automated deployment pipeline and be continuously applied. InSpec can be integrated into your software development process starting from day zero and should be applied continuously as a part of any CI/CD lifecycle.
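As a sketch of what compliance as code looks like in practice (the profile name and control ID are illustrative, not taken from the webinar), an InSpec control can be written and executed like this:

```shell
# Scaffold a profile and add a simple control file
inspec init profile ssh-baseline
cat > ssh-baseline/controls/sshd.rb <<'EOF'
# Require that root login over SSH is disabled
control 'sshd-01' do
  impact 1.0
  title 'Disallow root login over SSH'
  describe sshd_config do
    its('PermitRootLogin') { should eq 'no' }
  end
end
EOF

# Run the profile against the local machine, or a remote target over SSH
inspec exec ssh-baseline
inspec exec ssh-baseline -t ssh://user@host
```

Because the policy is now a versionable file with a pass/fail result, the same `inspec exec` invocation can run on a developer laptop, in a CI job, and against production hosts, which is exactly the repeatability the paragraph above describes.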
In this webinar, we’ll explore how InSpec can improve compliance across your applications and infrastructure.
Join us to learn about:
- What’s new in InSpec 1.0
- InSpec enhancements for Microsoft Windows systems
- Integration between InSpec and Chef Automate
Who should attend:
Security experts, system administrators, software developers, or anyone striving to improve and harden their systems one test at a time.
Driven to a large extent by the increasing use of wireless bandwidth, mobile fiber infrastructures or fiber-to-the-antenna (FTTA) networks are growing very fast. Indeed, the fast adoption of smartphones and mobile video streaming is generating so much bandwidth demand that unprecedented data rates are now being deployed in mobile backhaul and metro rings, often at 10 Gbit/s. This webinar will discuss the typical topologies and challenges of these 10G mobile fiber network rollouts. Although some of the common issues inherent to 1G or 2.5G deployments (such as fiber loss problems) certainly still apply at 10G, new impairments such as chromatic dispersion (CD) and polarization mode dispersion (PMD) emerge due to the higher data rates and longer distances covered by the wireless backhaul. In addition, impairments such as optical return loss (ORL) have tighter requirements at 10G than at 2.5G. Accordingly, this webinar will examine common impairments observed at 10G in wireless backhaul networks, and present best testing practices to ensure trouble-free operation of these networks.
Foreign banks are increasingly seeking to diversify their financing opportunities.
With careful planning, banks can access US investors without subjecting themselves to the securities registration requirements applicable to public offerings and to ongoing disclosure and governance requirements applicable to US reporting companies.
This IFLR web seminar, in association with Morrison & Foerster, tackles the issue. Topics will include:
- Issuances exempt from registration under Rule 144A,
- Issuances that rely on the exception from registration provided by Securities Act Section 3(a)(2) for securities offered or guaranteed by banks,
- Setting up a Rule 144A or bank note program for straight debt, structured products or other securities,
- Issuing covered bonds in reliance on Rule 144A or Section 3(a)(2),
- Yankee CD programs, and
- Banking and securities regulatory requirements to consider prior to setting up an issuance program.