070-475 Free Practice Questions: "Microsoft Design and Implement Big Data Analytics Solutions"
You need to create a query that identifies the trending topics.
How should you complete the query? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Correct Answer:
Explanation
From scenario: Topics are considered to be trending if they generate many mentions in a specific country during a 15-minute time frame.
Box 1: TimeStamp
Azure Stream Analytics (ASA) is a cloud service that enables real-time processing over streams of data flowing in from devices, sensors, websites and other live systems. The stream-processing logic in ASA is expressed in a SQL-like query language with some added extensions such as windowing for performing temporal calculations.
ASA is a temporal system, so every event that flows through it has a timestamp. A timestamp is assigned automatically based on the event's arrival time at the input source, but you can also access a timestamp in your event payload explicitly using TIMESTAMP BY:
SELECT * FROM SensorReadings TIMESTAMP BY time
Box 2: GROUP BY
Example: Generate an output event if the temperature is above 75 for a total of 5 seconds:
SELECT sensorId, MIN(temp) AS temp FROM SensorReadings TIMESTAMP BY time GROUP BY sensorId, SlidingWindow(second, 5) HAVING MIN(temp) > 75
Box 3: SlidingWindow
Windowing is a core requirement for stream processing applications to perform set-based operations like counts or aggregations over events that arrive within a specified period of time. ASA supports three types of windows: Tumbling, Hopping, and Sliding.
With a Sliding Window, the system is asked to logically consider all possible windows of a given length and output events for cases when the content of the window actually changes - that is, when an event entered or exited the window.
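Putting the three boxes together, a minimal sketch of the completed query (the input name, column names, and mention threshold are illustrative assumptions, not part of the question):
SELECT Topic, CountryCode, COUNT(*) AS Mentions
FROM TweetStream TIMESTAMP BY CreatedAt
GROUP BY Topic, CountryCode, SlidingWindow(minute, 15)
HAVING COUNT(*) >= 1000
The query counts mentions per topic and country over every 15-minute sliding window and emits only the groups that reach the threshold.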
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company has multiple databases that contain millions of sales transactions.
You plan to implement a data mining solution to identify purchasing fraud.
You need to design a solution that mines 10 terabytes (TB) of sales data. The solution must meet the following requirements:
* Run the analysis to identify fraud once per week.
* Continue to receive new sales transactions while the analysis runs.
* Be able to stop computing services when the analysis is NOT running.
Solution: You create a Cloudera Hadoop cluster on Microsoft Azure virtual machines.
Does this meet the goal?
Correct Answer: A
You have data in an on-premises Microsoft SQL Server database.
You must ingest the data in Microsoft Azure Blob storage from the on-premises SQL Server database by using Azure Data Factory.
You need to identify which tasks must be performed from Azure.
In which sequence should you perform the actions? To answer, move all of the actions from the list of actions to the answer area and arrange them in the correct order.
NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.
Correct Answer:
Explanation
Step 1: Configure a Microsoft Data Management Gateway
Install and configure Azure Data Factory Integration Runtime.
The Integration Runtime is a customer-managed data integration infrastructure used by Azure Data Factory to provide data integration capabilities across different network environments. This runtime was formerly called "Data Management Gateway".
Step 2: Create a linked service for Azure Blob storage
Create an Azure Storage linked service (destination/sink). You link your Azure storage account to the data factory.
Step 3: Create a linked service for SQL Server
Create and encrypt a SQL Server linked service (source).
In this step, you link your on-premises SQL Server instance to the data factory.
Step 4: Create an input dataset and an output dataset.
Create a dataset for the source SQL Server database. In this step, you create input and output datasets. They represent input and output data for the copy operation, which copies data from the on-premises SQL Server database to Azure Blob storage.
Step 5: Create a pipeline.
You create a pipeline with a copy activity. The copy activity uses SqlServerDataset as the input dataset and AzureBlobDataset as the output dataset. The source type is set to SqlSource and the sink type is set to BlobSink.
References: https://docs.microsoft.com/en-us/azure/data-factory/tutorial-hybrid-copy-powershell
You are designing a solution that will use Apache HBase on Microsoft Azure HDInsight.
You need to design the row keys for the database to ensure that client traffic is directed over all of the nodes in the cluster.
What are two possible techniques that you can use? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Correct Answer: A, C
You have a Microsoft Azure Data Factory that loads data to an analytics solution.
You receive an alert that an error occurred during the last processing of a data stream.
You debug the problem and resolve the error.
You need to process the data stream that caused the error.
What should you do?
Correct Answer: D
You need to ingest data from various data stores into a Microsoft Azure SQL data warehouse by using PolyBase.
You create an Azure Data Factory.
Which three components should you create next? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Correct Answer: B, D
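For background, PolyBase itself performs the load through T-SQL external objects in the data warehouse. A minimal sketch of that side of the process (the credential, container path, and table schema are illustrative assumptions):
CREATE MASTER KEY; -- required once per database before creating a scoped credential
CREATE DATABASE SCOPED CREDENTIAL BlobCredential
WITH IDENTITY = 'blobuser', SECRET = '<storage-account-key>';
CREATE EXTERNAL DATA SOURCE AzureBlobStore
WITH (TYPE = HADOOP, LOCATION = 'wasbs://staging@<account>.blob.core.windows.net', CREDENTIAL = BlobCredential);
CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT, FORMAT_OPTIONS (FIELD_TERMINATOR = ','));
CREATE EXTERNAL TABLE dbo.SalesStaging (SaleId INT, Amount DECIMAL(18,2), SaleDate DATETIME2)
WITH (LOCATION = '/sales/', DATA_SOURCE = AzureBlobStore, FILE_FORMAT = CsvFormat);
-- CTAS reads the external files in parallel and lands them in the warehouse
CREATE TABLE dbo.Sales WITH (DISTRIBUTION = ROUND_ROBIN) AS SELECT * FROM dbo.SalesStaging;
When Azure Data Factory's copy activity is configured to use PolyBase, it performs an equivalent load on your behalf.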
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to deploy a Microsoft Azure SQL data warehouse and a web application.
The data warehouse will ingest 5 TB of data from an on-premises Microsoft SQL Server database daily. The web application will query the data warehouse.
You need to design a solution to ingest data into the data warehouse.
Solution: You use the bcp utility to export CSV files from SQL Server and then to import the files to Azure SQL Data Warehouse.
Does this meet the goal?
Correct Answer: B
You have an application that displays data from a Microsoft Azure SQL database. The database contains credit card numbers.
You need to ensure that the application only displays the last four digits of each credit card number when a credit card number is returned from a query. The solution must NOT require any changes to the data in the database.
What should you use?
Correct Answer: B
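The behavior described here, returning only the last four digits of each card number without modifying the stored data, is what SQL Database dynamic data masking provides. A minimal T-SQL sketch (the dbo.Payments table and CreditCardNumber column are illustrative assumptions):
ALTER TABLE dbo.Payments
ALTER COLUMN CreditCardNumber ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');
Non-privileged users querying the column then see values such as XXXX-XXXX-XXXX-1234, while the underlying data remains unchanged.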
Your company has two Microsoft Azure SQL databases named db1 and db2.
You need to move data from a table in db1 to a table in db2 by using a pipeline in Azure Data Factory.
You create an Azure Data Factory named ADF1.
Which two types of objects should you create in ADF1 to complete the pipeline? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Correct Answer: C, E