070-475 Free Practice Questions: Microsoft Design and Implement Big Data Analytics Solutions

The settings used for slice processing are described in the following table.

If the slice processing fails, you need to identify the number of retries that will be performed before the slice execution status changes to failed.
How many retries should you identify?
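The table referenced above is not reproduced here, but for orientation: in Azure Data Factory (version 1), the retry count for slice execution is defined in the activity's `policy` section of the pipeline JSON. A hedged sketch of where that setting lives (all values are placeholders, not the answer to this question):

```json
"policy": {
    "concurrency": 1,
    "executionPriorityOrder": "OldestFirst",
    "retry": 3,
    "timeout": "01:00:00"
}
```

With `"retry": 3`, the slice is retried three times after the initial attempt before its status changes to Failed.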

You have a web application that generates several terabytes (TB) of financial documents each day. The application processes the documents in batches.
You need to store the documents in Microsoft Azure. The solution must ensure that a user can restore the previous version of a document.
Which type of storage should you use for the documents?

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Apache Spark system that contains 5 TB of data.
You need to write queries that analyze the data in the system. The queries must meet the following requirements:
* Use static data typing.
* Execute queries as quickly as possible.
* Have access to the latest language features.
Solution: You write the queries by using Scala.
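As context for the static-typing requirement: Scala lets Spark queries be expressed against typed records (`Dataset[T]`), so field references are checked at compile time. A minimal sketch in plain Scala, without a Spark dependency; the `Trade` record and its fields are illustrative assumptions, not part of the question:

```scala
// Illustrative record type; in Spark the same pattern appears as Dataset[Trade].
case class Trade(symbol: String, amount: Double)

object TypedQueryDemo {
  // A typed query: misspelling a field (e.g. t.amont) is a compile-time error,
  // unlike a string-based SQL expression evaluated at runtime.
  def largeTrades(trades: Seq[Trade], threshold: Double): Seq[Trade] =
    trades.filter(t => t.amount > threshold)

  def main(args: Array[String]): Unit = {
    val data = Seq(Trade("MSFT", 120.0), Trade("AAPL", 80.0))
    println(largeTrades(data, 100.0).map(_.symbol).mkString(","))
  }
}
```

The same filter written with a typed Spark `Dataset` keeps this compile-time checking while running on the cluster.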

You are automating the deployment of a Microsoft Azure Data Factory solution. The data factory will interact with a file stored in Azure Blob storage.
You need to use the REST API to create a linked service to interact with the file.
How should you complete the request body? To answer, drag the appropriate code elements to the correct locations. Each code element may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Correct answer:

Explanation
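For reference, a linked service to Azure Blob storage in the Data Factory (version 1) REST API takes a request body shaped like the following; the name is arbitrary and the account name and key are placeholders:

```json
{
    "name": "StorageLinkedService",
    "properties": {
        "type": "AzureStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<accountname>;AccountKey=<accountkey>"
        }
    }
}
```

The `type` of `AzureStorage` and the `connectionString` under `typeProperties` are the elements the question expects you to place.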
You are designing a solution that will use Apache HBase on Microsoft Azure HDInsight.
You need to design the row keys for the database to ensure that client traffic is directed over all of the nodes in the cluster.
What are two possible techniques that you can use? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
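The answer choices are not reproduced here, but the two techniques usually cited for spreading HBase traffic across all region servers are salting the row key with a computed prefix and hashing the row key. A hedged sketch of both, assuming an illustrative bucket count and key format:

```scala
object RowKeyDemo {
  val Buckets = 8 // illustrative assumption; typically sized to the cluster

  // Salting: prepend a deterministic bucket id so sequential keys are spread
  // across regions instead of hotspotting one region server.
  def saltedKey(key: String): String = {
    val bucket = math.abs(key.hashCode) % Buckets
    f"$bucket%02d-$key"
  }

  // Hashing: prefix the natural key with its hash so lexically adjacent keys
  // land in unrelated parts of the keyspace.
  def hashedKey(key: String): String =
    f"${math.abs(key.hashCode)}%08x-$key"

  def main(args: Array[String]): Unit =
    Seq("user0001", "user0002", "user0003").foreach { k =>
      println(s"${saltedKey(k)}  ${hashedKey(k)}")
    }
}
```

Both prefixes are deterministic, so reads can recompute the prefix from the natural key; the trade-off is that simple range scans over the natural key order are no longer contiguous.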

Correct answer: A, C
You have a Microsoft Azure data factory.
You assign administrative roles to the users in the following table.

You discover that several new data factory instances were created.
You need to ensure that only User5 can create a new data factory instance.
Which two roles should you change? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

Correct answer: B, D
You are developing a solution to ingest data in real-time from manufacturing sensors. The data will be archived. The archived data might be monitored after it is written.
You need to recommend a solution to ingest and archive the sensor data. The solution must allow alerts to be sent to specific users as the data is ingested.
What should you include in the recommendation?

You are designing an application that will perform real-time processing by using Microsoft Azure Stream Analytics.
You need to identify the valid outputs of a Stream Analytics job.
What are three possible outputs that you can use? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

Correct answer: A, C, E
You have the following Hive query.
CREATE TABLE UserVisits (username string, urlvisited string, time date);
LOAD DATA INPATH 'wasb:///Logs' OVERWRITE INTO TABLE UserVisits;
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the script.
NOTE: Each correct selection is worth one point.
Correct answer:

Explanation
You have an Apache Storm cluster.
The cluster will ingest data from a Microsoft Azure event hub.
The event hub has the characteristics described in the following table.

You are designing the Storm application topology.
You need to ingest data from all of the partitions. The solution must maximize the throughput of the data ingestion.
Which setting should you use?
