
Java in the database: is it really useful? Solving impossible Big Data challenges

Since 1999, Oracle has included a Java Virtual Machine (JVM) within the database. That makes it old enough to drive and well past time to get a real job. In today’s data-obsessed world, that job is fortifying Oracle’s database with a healthy dose of analytics to give your database the power to handle the data challenges of the 21st century.

There are numerous advantages to adopting a 100% Java code base for in-database analytics. Security is enhanced because the data never has to leave the database to be analyzed. The code is highly portable: the identical Java classes that run in the database will run on any client, on any operating system. And the modern paradigm of taking the algorithms to the data is achieved with minimal effort.

Until now, a single Java solution with all these qualities wasn’t available. With the JMSL Numerical Libraries, you get a suite of algorithms with routines for predictive analytics, data mining, regression, forecasting, and data cleaning. JMSL is scalable and can be used in Hadoop MapReduce applications. Now, the JMSL Numerical Libraries make Java in the database more than useful: they make it unbeatable.

This webinar makes the case for embedded analytics and provides examples using an Oracle database and JMSL. And if you’re not convinced, tell us so in the live, interactive Q&A!
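The “algorithms to the data” idea rests on Oracle’s embedded JVM: an ordinary Java class can be loaded into the database (e.g. with the `loadjava` utility) and exposed to SQL through a PL/SQL call specification, so the analysis runs where the data lives. A minimal sketch of such a class is below; the class name, method, and call spec are illustrative assumptions, not JMSL API, which a real deployment would use instead of this hand-rolled fit.

```java
// OlsForecast.java -- a self-contained least-squares fit of the kind that
// could be loaded into the Oracle JVM and called from SQL.
// (Illustrative only; JMSL provides production-grade regression classes.)
public class OlsForecast {

    // Fit y = a + b*x by ordinary least squares, then predict y at xNew.
    public static double predict(double[] x, double[] y, double xNew) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i];
            sy += y[i];
            sxx += x[i] * x[i];
            sxy += x[i] * y[i];
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx); // slope
        double a = (sy - b * sx) / n;                         // intercept
        return a + b * xNew;
    }

    /* A PL/SQL call spec (sketch) would expose the method to SQL:

       CREATE OR REPLACE FUNCTION ols_predict(...)
       RETURN NUMBER AS LANGUAGE JAVA
       NAME 'OlsForecast.predict(double[], double[], double) return double';
    */
}
```

Because the same bytecode also runs on any client JVM, the identical class can be unit-tested on a laptop before being loaded into the database.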
Recorded: Oct 8 2015 46 mins
Presented by
Wendy Hou, Product Manager & Mark Sweeney, Sales Engineer, Rogue Wave Software

  • Channel profile
  • Building Enterprise Scale Solutions for Healthcare with Modern Data Architecture Recorded: Nov 10 2016 47 mins
    Ramu Kalvakuntla, Sr. Principal, Big Data Practice, Clarity Solution Group
    We are all aware of the challenges enterprises face with growing data and siloed data stores. Businesses are not able to make reliable decisions with untrusted data, and on top of that, they don’t have access to all the data within and outside their enterprise needed to stay ahead of the competition and make key decisions for their business.

    This session will take a deep dive into the challenges healthcare businesses face today, as well as how to build a Modern Data Architecture using emerging technologies such as Hadoop, Spark, NoSQL datastores, and MPP data stores, plus scalable and cost-effective cloud solutions such as AWS, Azure, and BigStep.
  • Data at the corner of SAP and AWS Recorded: Nov 9 2016 48 mins
    Frank Stienhans, CTO, Ocean9
    Past infrastructures provided compute, storage, and network for static enterprise deployments that changed every few years. This talk will analyze the consequences of a world where production SAP and Spark clusters, including their data, can be provisioned in minutes at the push of a button.

    What does this mean for the IT architecture of an enterprise? How do you stay in control in a super-agile world?
  • 3 Critical Data Preparation Mistakes and How to Avoid Them Recorded: Oct 20 2016 32 mins
    Mark Vivien, Business Development, Big Data
    Whether you're just starting out or a seasoned solution architect, developer, or data scientist, there are key mistakes that you've probably made in the past, may be making now, or will likely make in the future. In fact, these same mistakes are likely impacting your company's overall success with its analytics program.

    Join us for our webinar, 3 Critical Data Preparation Mistakes and How to Avoid Them, as we discuss three of the most critical, fundamental pitfalls and more!

    • Importance of early and effective business partner engagement
    • Importance of business context to governance
    • Importance of change and learning to your development methodology
  • Practical Data Cleaning Recorded: Oct 13 2016 38 mins
    Lee Baker, CEO, Chi-Squared Innovations
    The basics of data cleaning are remarkably simple, yet few take the time to get organized from the start.

    If you want to get the most out of your data, you're going to need to treat it with respect, and by getting prepared and following a few simple rules your data cleaning processes can be simple, fast and effective.

    The Practical Data Cleaning webinar is a thorough introduction to the basics of data cleaning and takes you through:

    • Data Collection
    • Data Cleaning
    • Data Classification
    • Data Integrity
    • Working Smarter, Not Harder
  • Self-service BI for SAP and HANA – Dream or Reality? Recorded: Sep 14 2016 48 mins
    Swen Conrad, CEO, Ocean9
    Gartner predicts that “analytics will be pervasive … for decisions and actions across the business.” Sounds like analytics nirvana with instant access for any analysis you want to do, in other words self-service BI. Is this dream or reality?

    Join this webinar to find out how clouds like AWS or Azure are moving the industry close to this nirvana today through simple assembly of cloud services combined with the appropriate consumption model of these services.

    We will demonstrate how easy it is to provision your high end SAP HANA Database right next to your BI Analytics tier.

    Maybe we are closer to this nirvana than you think.
  • The Role of FPGAs in Spark Accelerators Recorded: Aug 29 2016 61 mins
    Shreyas Shah, Principal Data center Architect, Xilinx
    In the cloud computing era, data growth is exponential: every day billions of photos are shared and vast amounts of new data are created in multiple formats. Within this cloud of data, the relevant data with real monetary value is small. To extract that valuable data, big data analytics frameworks like Spark are used, running on top of a variety of file systems and databases. To accelerate Spark by 10-1000x, customers are creating solutions like log file accelerators, storage layer accelerators, MLlib (one of the Spark libraries) accelerators, and SQL accelerators.

    FPGAs (Field Programmable Gate Arrays) are an ideal fit for these types of accelerators, where the workloads are constantly changing. For example, they can accelerate different algorithms on different data depending on the end user and the time of day, while keeping the same hardware.

    This webinar will describe the role of FPGAs in Spark accelerators and present Spark accelerator use cases.
  • Using Predictive Analytics to optimize Application operations: Can you dig it? Recorded: Jul 22 2016 23 mins
    Lesley-Anne Wilson, Group Product Rollout & Support Engineer, Digicel Group
    Many studies have examined the benefits of Predictive Analytics for customer engagement and changing customer behaviour. The less romanticized side, however, is the benefit to IT operations, as it is sometimes difficult to turn the focus from direct revenue-impacting gains to the more indirect revenue gains that can come from optimization and proactive issue resolution.

    I will be speaking, from an application operations engineer’s perspective, on the benefits to the business of using Predictive Analytics to optimize applications.
  • Predictive and Prescriptive Power Discovery from Fast, Wide, Deep Big Data Recorded: Jul 22 2016 45 mins
    Kirk Borne, Principal Data Scientist, Booz Allen Hamilton
    I will summarize the stages of analytics maturity that lead an organization from traditional reporting (descriptive analytics: hindsight), through predictive analytics (foresight), and into prescriptive analytics (insight). The benefits of big data (especially high-variety data) will be demonstrated with simple examples that can be applied to significant use cases.

    The goal of data science in this case is to discover predictive power and prescriptive power from your data collections, in order to achieve optimal decisions and outcomes.
  • Live Webinar: Overcoming the Storage Challenges Cassandra and Couchbase Create Recorded: Jun 30 2016 53 mins
    George Crump, Storage Switzerland
    NoSQL databases like Cassandra and Couchbase are quickly becoming key components of the modern IT infrastructure. But this modernization creates new challenges, especially for storage in the broad sense. In-memory databases perform well when there is enough memory available; however, when data sets grow too large and need to access storage, application performance degrades dramatically. Moreover, even if enough memory is available, persistent client requests can bring the servers to their knees.

    Join Storage Switzerland and Plexistor where you will learn:

    1. What Cassandra and Couchbase are
    2. Why organizations are adopting them
    3. The storage challenges they create
    4. How organizations attempt to work around these challenges
    5. How to design a solution to these challenges instead of a workaround
  • Big-Data-as-a-Service: On-Demand Elastic Infrastructure for Hadoop and Spark Recorded: Jun 22 2016 56 mins
    Kris Applegate, Big Data Solution Architect, Dell; Tom Phelan, Chief Architect, BlueData
    Watch this webinar to learn about Big-Data-as-a-Service from experts at Dell and BlueData.

    Enterprises have been using both Big Data and Cloud Computing technologies for years. Until recently, however, the two had not been combined.

    Now the agility and efficiency benefits of self-service elastic infrastructure are being extended to big data initiatives – whether on-premises or in the public cloud.

    In this webinar, you’ll learn about:

    - The benefits of Big-Data-as-a-Service – including agility, cost-savings, and separation of compute from storage
    - Innovations that enable an on-demand cloud operating model for on-premises Hadoop and Spark deployments
    - The use of container technology to deliver equivalent performance to bare-metal for Big Data workloads
    - Tradeoffs, requirements, and key considerations for Big-Data-as-a-Service in the enterprise
  • The Big Data decision path incorporating SAP landscapes Recorded: Jun 8 2016 49 mins
    Swen Conrad, CEO, Ocean9
    Leading companies derive big data technology choices from business needs instead of technology merits. With the variety of possible use cases, either Hadoop, Spark or SAP HANA may provide the best fit to solve business challenges and create value.

    Sounds easy, but managing a variety of big data solutions within a single company puts a skills and cost premium on the organization.

    This session will guide you to the right big data technology according to business needs and highlights the fastest path to adoption.
  • Case Study in Big Data and Data Science: University of Georgia Recorded: May 11 2016 61 mins
    Shannon Quinn, Assistant Professor at University of Georgia; and Nanda Vijaydev, Director of Solutions Management at BlueData
    Join this webinar to learn how the University of Georgia (UGA) uses Apache Spark and other tools for Big Data analytics and data science research.

    UGA needs to give its students and faculty the ability to do hands-on data analysis, with instant access to their own Spark clusters and other Big Data applications.

    So how do they provide on-demand Big Data infrastructure and applications for a wide range of data science use cases? How do they give their users the flexibility to try different tools without excessive overhead or cost?

    In this webinar, you’ll learn how to:

    - Spin up new Spark and Hadoop clusters within minutes, and quickly upgrade to new versions

    - Make it easy for users to build and tinker with their own end-to-end data science environments

    - Deploy cost-effective, on-premises elastic infrastructure for Big Data analytics and research
  • Using the Cloud for Speed-of-Thought Analytics on All Your Data Recorded: Apr 28 2016 64 mins
    Snowflake Computing, Ask.com, Tableau
    1.5 TB of data per day? No problem! Learn how Ask.com turned to Snowflake’s cloud-native data warehouse combined with Tableau’s data visualization solution to address their challenges.

    Ask.com and its parent family of premium websites operate in an extremely competitive environment. To stand out in the crowd, the huge amounts of data generated by these websites needs to be analyzed to understand and monetize a wide variety of site traffic.

    Their challenges:
    Ask.com’s previous solution of Hadoop + a traditional data warehouse was limiting their analysts’ ability to bring together and analyze their data.
    - Significant amounts of custom processing to bring together data
    - Performance issues for data users due to concurrency and contention challenges
    - Several hours to incorporate new data into analytics.

    Join Ask.com, Snowflake Computing, and Tableau for an informative webinar where you’ll learn:
    - How Ask.com simplified their data infrastructure by eliminating the need for Hadoop + a traditional data warehouse
    - Why Ask.com’s analysts are able to explore and analyze data without the frustration of poor, inconsistent performance
    - How Ask.com’s widely distributed team of analysts can now access a single comprehensive view of data for better insights
  • CapSpecialty - Leveraging Data to Deliver Faster Business Results Linked to KPIs Recorded: Apr 27 2016 47 mins
    MicroStrategy, Snowflake and CapSpecialty
    CapSpecialty is upping its game to become the preferred provider of specialty insurance products using MicroStrategy Analytics and Snowflake Cloud Data Warehousing.

    CapSpecialty’s investment to overhaul its data pipeline and management systems has delivered fast and measurable results. The stage has been set for CapSpecialty executives to view dashboards that display real-time profitability and KPIs. Insurance analysts and underwriters have self-service access to 10 years’ worth of governed data, allowing them to analyze customer trends and view product performance by category, geography, and agent. CapSpecialty is witnessing measurable business results from the engines that power their BI environment: MicroStrategy enterprise analytics platform firmly integrated with Snowflake’s cloud-based elastic data warehouse.

    Attend this webcast to learn how CapSpecialty has combined enterprise analytics with an elastic cloud-based data warehouse, a solution that serves as the cornerstone of their agile, metrics-focused culture.

    Join us live!
  • Analyzing Data from the Internet of Things Recorded: Apr 14 2016 49 mins
    Vaidy Krishnan, Tableau
    What do a jet engine and a pacemaker have in common? Data. They’re generating lots of it, along with millions of other connected devices being used right now. The Internet of Things is a powerful, interactive ecosystem that is generating unprecedented amounts of data.

    But there is a myth that you have to be an analyst or an expert to dive into this data. In fact, device analytics is for everyone. How can the everyman benefit from this data? How can we analyze this information to learn more about ourselves? How can it improve our world?

    In this 45-minute webinar, we’ll cover tips, tricks, and best practices to visualize and understand device data and put it to meaningful use.
  • Breaking Through Your Data Bottleneck with Agile ETL Recorded: Apr 7 2016 32 mins
    Mark Marinelli, CTO, Lavastorm
    Is your access to data via ETL channels creating a bottleneck in your business? Traditional ETL tools are exceedingly good at moving large quantities of data from one place to another repeatedly, reliably, and efficiently. For a large class of problems, where time-to-value is critical and applications need to be flexible as business requirements change, these ETL tools and waterfall projects are not a viable solution.

    Alternative technologies are now available which marry self-service data preparation with enterprise data management capabilities to accelerate value delivery and accommodate change, without reducing the scale, scope, and rigor of data analysis.

    Combined with new skills and new ways of thinking about data discovery, these new tools power a new agile process which yields more accurate results more quickly, moving beyond the traditional ETL bottleneck to an environment of continuous value delivery.

    In this webinar you will learn:
    • Where traditional ETL is and isn’t well suited to data analysis and BI projects
    • The requirements for Agile ETL - people, processes, and tools
    • How an agile approach can generate ROI more quickly, and with more trusted results
    • How agile ETL lays the foundation for more flexible, responsive data analysis in the future, as business context and systems change
  • More Databases. More Hackers. More Audits. Recorded: Mar 31 2016 60 mins
    Terry Ray, Chief Product Strategist and Cheryl O'Neil, Product Marketing Director
    Exploding data growth doesn’t mean you have to sacrifice data security or compliance readiness. The more clarity you have into where your sensitive data is and who is accessing it, the easier it is to secure and meet compliance regulations.

    Attend this webinar to learn how to:
    * Detect and block cyber security events in real-time
    * Protect large and diverse data environments
    * Simplify compliance enforcements and reporting
    * Take control of escalating costs.
  • Modern BI with Platfora Big Data Discovery Recorded: Mar 22 2016 46 mins
    Denise Hemke, Director of Product Management, Platfora
    Companies have realized significant business outcomes using Modern BI technology, from achieving a 37% increase in ad-spend efficiency to avoiding a $1 million advertising expense. Platfora’s Big Data Discovery has helped companies become truly data driven. Come see how Platfora makes finding and visualizing big data insights across your organization easier than ever.

    Join us and see how Platfora’s big data discovery enables you to:
    • Quickly and easily prepare raw data without IT support
    • Find patterns and derive insights at drag-and-drop speed
    • Work seamlessly with the BI or visualization tool of your preference
  • Building Real-Time Data Pipelines with Spark Streaming, Kafka, and Cassandra Recorded: Mar 16 2016 62 mins
    Nik Rouda, Senior Analyst for Big Data at ESG; and Nanda Vijaydev, Director of Solutions Management at BlueData
    Join this webinar to learn best practices for building real-time data pipelines with Spark Streaming, Kafka, and Cassandra.

    Analysis of real-time data streams can bring tremendous value – delivering competitive business advantage, averting potential crises, or creating new revenue streams.

    So how do you take advantage of this "fast data"? How do you build a real-time data pipeline to enable instant insights, immediate action, and continuous feedback?

    In this webinar, you'll learn:
    *Research from analyst firm Enterprise Strategy Group (ESG) on real-time data and streaming analytics
    *Use cases and real-world examples of real-time data processing, including benefits and challenges
    *Key technologies that ensure high throughput, low-latency, and fault-tolerant streaming analytics
    *How to build a scalable and flexible data science pipeline using Spark Streaming, Kafka, and Cassandra

    Don’t miss this webinar. Find out how to get started with your real-time data pipeline today!
  • Top 8 Big Data Trends for 2016 Recorded: Mar 10 2016 45 mins
    Dan Kogan, Eric Hannell, and Jeff Feng
    Every year at Tableau, we look back at the last 12 months and evaluate the ways in which technology is changing the face of business decisions. That discussion drives our list of top big data trends for the following year.

    In this 45-minute webinar, explore:

    - Emerging trends in big data
    - Tableau experts' take on the changing big data landscape
    - Considerations for your 2016 big data strategy

    Tune in to submit questions during the live Q&A with our panelists and attendees.
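The “few simple rules” promised by the Practical Data Cleaning session above are often as mundane as trimming whitespace, normalizing case, and dropping blanks and exact duplicates before any analysis begins. A small illustrative sketch in Java (the rules and sample values are assumptions, not taken from the webinar):

```java
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

public class SimpleCleaner {
    // Apply three basic cleaning rules to a column of raw strings:
    // trim surrounding whitespace, normalize case, and drop blanks
    // and exact duplicates.
    public static List<String> clean(List<String> raw) {
        return raw.stream()
                  .map(String::trim)                      // rule 1: trim
                  .map(s -> s.toLowerCase(Locale.ROOT))   // rule 2: normalize case
                  .filter(s -> !s.isEmpty())              // rule 3a: drop blanks
                  .distinct()                             // rule 3b: drop duplicates
                  .collect(Collectors.toList());
    }
}
```

Applied to a messy category column such as `"  Male"`, `"male "`, `"FEMALE"`, `" "`, this yields the two clean values `male` and `female`; codifying such rules up front is what keeps the cleaning process simple, fast, and repeatable.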

  • Title: Java in the database: is it really useful? Solving impossible Big Data challenges
  • Live at: Oct 8 2015 5:00 pm
  • Presented by: Wendy Hou, Product Manager & Mark Sweeney, Sales Engineer, Rogue Wave Software