Big Data, Big Objects

by Martín de León
Salesforce Developer at Nimacloud

What is Big Data?

Big Data is defined as data that exceeds the processing capacity of conventional database systems: the data is too large, moves too fast, or does not conform to traditional database structures.

The term is used to describe taming the volume, velocity, and variability of massive data. Within that data lie valuable patterns and information, previously hidden because of the amount of work required to extract them. For larger companies, such as Walmart or Google, this power has been available for some time, but at a very high cost. Today's commodity hardware, cloud architectures, and open source software bring Big Data processing within reach of organizations with far fewer resources.

Processing large data sets has become accessible even to small startups, which can cheaply rent server time in the cloud.

The value of big data for an organization falls into two categories: analytical use and enabling new products.

Big Data analysis can reveal insights previously hidden in data that was too costly to process, such as peer influence among customers, uncovered by analyzing buyers' transactions together with social and geographic data.

Being able to process every item of data in a reasonable amount of time eliminates the tiresome need for sampling and encourages an investigative approach to the data, in contrast to the rather static nature of running predetermined reports.

As a general term, “big data” can be quite nebulous, in the same way that the term “cloud” encompasses various technologies. The input to big data systems can be social network conversations, web server logs, traffic flow sensors, satellite images, audio streams, banking transactions, MP3 files, web page content, scanned government documents, GPS routes, automobile telemetry, or financial market data.

 

Salesforce and Big Data

A few years ago, I remember having to process about 100,000 records (with somewhat complicated logic, it's true). Even though that didn't seem like a lot of data, we still spent quite a while finding a solution that didn't exceed any Salesforce limit.

That led me to look for Salesforce's answer to Big Data, and at the time I found nothing. But I had no doubt that sooner or later Salesforce would get on the Big Data train; it was only logical.

That answer eventually arrived: big objects, which provide consistent performance for a billion records or more.

There are two types of big objects.

Standard big objects are defined by Salesforce and included in Salesforce products.

Field History Archive, part of the Field Audit Trail product, is an example of a standard big object.

Custom big objects are the ones you define and deploy yourself through the Metadata API. To define a custom big object, create an object file containing its definition, fields, and index, together with a permission set that defines the permissions for each field, and a package file that describes the object's metadata content. The fields you define in a big object's index determine the identity of its records and how the big object can be queried, as in the sketch below.
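
As an illustration, here is a minimal sketch of what such an object file could look like. Everything in it is hypothetical: the object name Customer_Transaction__b, its fields, and the index are invented for this example, and the exact elements and their ordering may vary by API version.

    <?xml version="1.0" encoding="UTF-8"?>
    <CustomObject xmlns="http://soap.sforce.com/2006/04/metadata">
        <!-- Deployed as something like objects/Customer_Transaction__b.object -->
        <deploymentStatus>Deployed</deploymentStatus>
        <fields>
            <fullName>Account__c</fullName>
            <label>Account</label>
            <referenceTo>Account</referenceTo>
            <relationshipName>Account</relationshipName>
            <required>true</required>
            <type>Lookup</type>
        </fields>
        <fields>
            <fullName>Transaction_Date__c</fullName>
            <label>Transaction Date</label>
            <required>true</required>
            <type>DateTime</type>
        </fields>
        <fields>
            <fullName>Amount__c</fullName>
            <label>Amount</label>
            <length>16</length>
            <required>false</required>
            <type>Text</type>
        </fields>
        <!-- The index determines record identity and which fields queries can filter on -->
        <indexes>
            <fullName>CustomerTransactionIndex</fullName>
            <fields>
                <name>Account__c</name>
                <sortDirection>DESC</sortDirection>
            </fields>
            <fields>
                <name>Transaction_Date__c</name>
                <sortDirection>DESC</sortDirection>
            </fields>
            <label>Customer Transaction Index</label>
        </indexes>
        <label>Customer Transaction</label>
        <pluralLabel>Customer Transactions</pluralLabel>
    </CustomObject>

Alongside this file, the package.xml lists the CustomObject type and a permission set grants the field permissions, just as in any other Metadata API deployment.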

Some ways to use custom big objects

Although you can use big objects to store various types of data, big objects were created to handle a few specific scenarios.

360° view of the customer

You have a large amount of customer information that you want to store. From loyalty programs to transactions, orders, and billing information, use a custom big object to track every detail.

Auditing and tracking

Keep a long-term record of your users' Salesforce usage for analysis or legal compliance purposes.

Historical archive
Keep access to historical data for analysis or legal compliance purposes while optimizing the performance of your main CRM or Force.com applications.

Queries with Big Objects
Big objects can be queried using the standard SOQL we've always known or with asynchronous SOQL.

SOQL
Use standard SOQL when you know the query will return a small amount of data, you don't want to wait for the results, or you need the results immediately for further processing in Apex.
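
As a minimal sketch, assuming the hypothetical Customer_Transaction__b big object defined earlier, a standard SOQL query run from Apex could look like this. Keep in mind that filters on a big object must use the index fields, starting from the first field of the index.

    // Hedged example: standard SOQL against a hypothetical big object, run from Apex.
    // Filters may only reference index fields, in the order the index defines them.
    Id accountId = '001000000000001AAA';   // example record Id
    List<Customer_Transaction__b> transactions = [
        SELECT Account__c, Transaction_Date__c, Amount__c
        FROM Customer_Transaction__b
        WHERE Account__c = :accountId
        LIMIT 200
    ];
    System.debug(transactions.size() + ' transactions retrieved');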

Asynchronous SOQL

Salesforce developed asynchronous SOQL to help handle the millions upon millions of records a custom big object can hold. Asynchronous SOQL is a way to run SOQL queries in situations where you cannot expect results in real time because of the sheer size of the data being queried. It is a highly scalable solution that uses a subset of SOQL commands, making it easy to adopt for anyone already familiar with SOQL.

Asynchronous SOQL schedules and executes queries in the background, so you can run queries that would normally time out with regular SOQL. You can run multiple queries in the background while monitoring their execution status. Set up your queries and come back a few hours later to an excellent data set to work with. Asynchronous SOQL is the most efficient way to process a large amount of data in a big object.
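
For reference, here is a hedged sketch of what submitting one of these queries through the REST API could look like, roughly following the shape of the Async SOQL pilot documentation. The API version, the big object, its fields, and the custom object that receives the results are all invented for the example.

    POST /services/data/v42.0/async-queries/
    {
      "query": "SELECT Account__c, Amount__c FROM Customer_Transaction__b",
      "targetObject": "Transaction_Rollup__c",
      "targetFieldMap": {
        "Account__c": "Account__c",
        "Amount__c": "Amount__c"
      }
    }

The response includes a job Id that you can poll at the same endpoint to monitor the query's status; the results are written into the target object rather than returned directly.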

Limitations

Big objects support only object and field permissions. You must use the Metadata API to define a custom big object or add a field to it; you can't do it through the user interface.

SOQL relationship queries are based on a lookup field from a big object to a standard or custom object, and the relationship can appear only in the field selection list (not in filters or subqueries).
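
As a small sketch of what that means in practice, again using the hypothetical Customer_Transaction__b object and its Account__c lookup, the related record can be traversed in the SELECT clause, while the WHERE clause still filters on the index field itself:

    // Hedged example: the lookup relationship appears only in the field selection list;
    // the WHERE clause still filters on the index field (Account__c) itself.
    Id accountId = '001000000000001AAA';   // example record Id
    List<Customer_Transaction__b> rows = [
        SELECT Account__c, Account__r.Name, Amount__c
        FROM Customer_Transaction__b
        WHERE Account__c = :accountId
        LIMIT 50
    ];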

Big objects support custom Salesforce Lightning and Visualforce components instead of the standard user interface elements (home pages, detail pages, list views, and so on).

You can create up to 100 big objects per organization. The limits for big object fields are similar to those for custom objects and depend on your organization's license type.

Big objects do not support transactions that include big objects, standard objects, and custom objects. To support the scale of data in a big object, you can't use triggers, flows, processes, or the Salesforce mobile app with them.

 

 

This post is based on the talk “Big Data, Big Objects” presented at Punta Dreamin’18.

We were in Punta Dreamin’18!

Last March, we were platinum sponsors of Punta Dreamin’18, the conference of the Latin American Salesforce community, at the Conrad hotel in Punta del Este, Uruguay.

During the conference, we enjoyed talks from Sofia Rodriguez Mata, Vladimir Gerasimob, Don Robins, Angela Mahoney, and Elena Inurrategui, among other international representatives of the Salesforce community. This year, we also shared the stage with several local speakers.

Many topics were presented and discussed, such as Blockchain, Salesforce Einstein, Salesforce for Nonprofits, Lightning Design System, Continuous Integration Testing, and Big Objects.

On this occasion, in addition to sponsoring the event, we took part with two talks: Big Data, Big Objects, by our developer Martín de León, and Empower QA, by our QA analysts Camila Pose and Florencia Viurrarena. For our colleagues, it was their first experience at a conference of this scale and importance, and they rose to the challenge with excellent results.

We also presented “Basket Force”, a basketball game designed by our team, based on integrating a basketball backboard with Salesforce. Participants played it in the breaks between talks.

Participating in these events is important for Nimacloud because it brings us closer to the Latin American Salesforce community, along with the information and up-to-date knowledge that are always important for our continuous technical improvement.

Some pictures of our team in Punta Dreamin’18

Big Data, Big Objects

Resources from the talk presented at Punta Dreamin’ 18