070-475 Free Practice Questions: "Microsoft Design and Implement Big Data Analytics Solutions"

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to implement a new data warehouse.
You have the following information regarding the data warehouse:
* The first data files for the data warehouse will be available in a few days.
* Most queries that will be executed against the data warehouse are ad-hoc.
* The schemas of data files that will be loaded to the data warehouse change often.
* One month after the planned implementation, the data warehouse will contain 15 TB of data.
You need to recommend a database solution to support the planned implementation.
Solution: You recommend an Apache Spark system.
Does this meet the goal?

Users report that when they access data that is more than one year old from a dashboard, the response time is slow.
You need to resolve the issue that causes the slow response when visualizing older data. What should you do?

You plan to use Microsoft Azure IoT Hub to capture data from medical devices that contain sensors.
You need to ensure that each device has its own credentials. The solution must minimize the number of required privileges.
Which policy should you apply to the devices?

Explanation: (Visible only to JPNTest members)
Your company has several thousand sensors deployed.
You have a Microsoft Azure Stream Analytics job that receives two data streams, Input1 and Input2, from an Azure event hub. The data streams are partitioned by using a column named SensorName. Each sensor is identified by a field named SensorID.
You discover that Input2 is empty occasionally and the data from Input1 is ignored during the processing of the Stream Analytics job.
You need to ensure that the Stream Analytics job always processes the data from Input1.
How should you modify the query? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Correct answer:

Explanation

Box 1: LEFT OUTER JOIN
LEFT OUTER JOIN specifies that all rows from the left table not meeting the join condition are included in the result set, and output columns from the other table are set to NULL in addition to all rows returned by the inner join.
Box 2: ON I1.SensorID = I2.SensorID
References: https://docs.microsoft.com/en-us/stream-analytics-query/join-azure-stream-analytics
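
For illustration only (not part of the original explanation), a minimal sketch of how the modified query might look, assuming each input also carries a Reading column and that matching events must arrive within 10 seconds of each other (both assumptions, not from the question):

SELECT I1.SensorID, I1.Reading AS Reading1, I2.Reading AS Reading2
FROM Input1 I1
LEFT OUTER JOIN Input2 I2
    ON I1.SensorID = I2.SensorID
    AND DATEDIFF(second, I1, I2) BETWEEN 0 AND 10

Because the join is LEFT OUTER, rows from Input1 are emitted even when Input2 is empty; the DATEDIFF condition is required by Stream Analytics for joins between two streams.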
You have a Microsoft Azure Stream Analytics solution.
You need to identify which types of windows must be used to group the following types of events:
* Events that have random time intervals and are captured in a single fixed-size window
* Events that have random time intervals and are captured in overlapping windows
Which window type should you identify for each event type? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Correct answer:

Explanation

Box 1: A Sliding Window
Box 2: A Sliding Window
With a Sliding Window, the system is asked to logically consider all possible windows of a given length and to output events for cases when the content of the window actually changes - that is, when an event entered or exited the window.
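
To illustrate the difference in window behavior, a hedged sketch in the Stream Analytics query language, assuming a stream named Readings with DeviceId and ReadingTime columns (hypothetical names, not from the question):

-- Tumbling window: fixed-size, non-overlapping; one result per device every 10 seconds
SELECT DeviceId, COUNT(*) AS ReadingCount
FROM Readings TIMESTAMP BY ReadingTime
GROUP BY DeviceId, TumblingWindow(second, 10)

-- Sliding window: overlapping; a result whenever an event enters or exits a 10-second window
SELECT DeviceId, COUNT(*) AS ReadingCount
FROM Readings TIMESTAMP BY ReadingTime
GROUP BY DeviceId, SlidingWindow(second, 10)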
You have a Microsoft Azure HDInsight cluster for analytics workloads. You have a C# application on a local computer.
You plan to use Azure Data Factory to run the C# application in Azure.
You need to create a data factory that runs the C# application by using HDInsight.
In which order should you perform the actions? To answer, move all actions from the list of actions to the answer area and arrange them in the correct order.
NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.
Correct answer:
You need to recommend a platform architecture for a big data solution that meets the following requirements:
* Supports batch processing
* Provides a holding area for a 3-petabyte (PB) dataset
* Minimizes the development effort to implement the solution
* Provides near-real-time relational querying across a multi-terabyte (TB) dataset
Which two platform architectures should you include in the recommendation? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

Correct answer: B, E
Explanation: (Visible only to JPNTest members)
You use Microsoft Azure Data Factory to orchestrate data movements and data transformations within Azure.
You plan to monitor the data factory to ensure that all of the activity slices run successfully. You need to identify a solution to rerun failed slices. What should you do?

You need to create a query that identifies the trending topics.
How should you complete the query? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Correct answer:

Explanation

From scenario: Topics are considered to be trending if they generate many mentions in a specific country during a 15-minute time frame.
Box 1: TimeStamp
Azure Stream Analytics (ASA) is a cloud service that enables real-time processing over streams of data flowing in from devices, sensors, websites and other live systems. The stream-processing logic in ASA is expressed in a SQL-like query language with some added extensions such as windowing for performing temporal calculations.
ASA is a temporal system, so every event that flows through it has a timestamp. A timestamp is assigned automatically based on the event's arrival time at the input source, but you can also reference a timestamp in your event payload explicitly by using TIMESTAMP BY:
SELECT * FROM SensorReadings TIMESTAMP BY time
Box 2: GROUP BY
Example: Generate an output event if the temperature is above 75 for a total of 5 seconds:
SELECT sensorId, MIN(temp) AS temp
FROM SensorReadings TIMESTAMP BY time
GROUP BY sensorId, SlidingWindow(second, 5)
HAVING MIN(temp) > 75
Box 3: SlidingWindow
Windowing is a core requirement for stream processing applications to perform set-based operations like counts or aggregations over events that arrive within a specified period of time. ASA supports three types of windows: Tumbling, Hopping, and Sliding.
With a Sliding Window, the system is asked to logically consider all possible windows of a given length and to output events for cases when the content of the window actually changes - that is, when an event entered or exited the window.
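
Putting the three boxes together, a hedged sketch of how the completed trending-topics query might look, assuming an input stream named TweetStream with CreatedAt, Topic, and CountryCode columns and a threshold of 100 mentions (all hypothetical, not given in the case study):

SELECT Topic, CountryCode, COUNT(*) AS Mentions
FROM TweetStream TIMESTAMP BY CreatedAt
GROUP BY Topic, CountryCode, SlidingWindow(minute, 15)
HAVING COUNT(*) >= 100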
You have a Microsoft Azure Data Factory pipeline.
You discover that the pipeline fails to execute because data is missing.
You need to rerun the failure in the pipeline.
Which cmdlet should you use?
