May 19, 2020

Thingworx Analytics Introduction 3/3: Time Series Data TTF Prediction


This is the 3rd article of the series: Thingworx Analytics introduction.
The 1st article introduces Analytics Function and Modules.
The 2nd article introduces Non Time Series TTF prediction.

Environment: Thingworx Platform 8.5, Analytics Server 8.5, Analytics Extension 8.5.

In TTF prediction, a Time Series Data model performs better than a Non Time Series Data model.
What is a Time Series Data model?
In Machine Learning terms, training a model means finding the best-matching parameters, which are then combined with the features to calculate the result values.
For Non Time Series Data models, there is no relationship between current values and previous values.
But for Time Series Data, each calculation checks not only the current values but also previous values, and those previous values are time-ordered.
There are 2 important terms here: look back size and data sampling frequency.
Look back size = the number of data samples to look back.
Data sampling frequency = the frequency of taking a new data sample.
Look back size * data sampling frequency = the total time span fed into the calculation, which I call the look back area. The data is queried from the Value Stream with maxItems = look back size and startDate = current date – look back area.
For example, if look back size = 4 and data sampling frequency = 1 minute, then look back area = 4 minutes.
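As a rough sketch of that arithmetic (the helper and its parameter names are my own, not a Thingworx API), the look back area determines the query window:

```javascript
// Compute the Value Stream query window for a Time Series model.
// lookBackSize = number of samples to look back;
// samplingFrequencyMs = data sampling frequency in milliseconds.
function lookBackQueryParams(now, lookBackSize, samplingFrequencyMs) {
    // look back area = look back size * data sampling frequency
    var lookBackAreaMs = lookBackSize * samplingFrequencyMs;
    return {
        maxItems: lookBackSize,                               // maxItems = look back size
        startDate: new Date(now.getTime() - lookBackAreaMs),  // current date - look back area
        endDate: now
    };
}

// look back size = 4, sampling frequency = 1 minute -> a 4 minute window
var queryWindow = lookBackQueryParams(new Date("2020-05-19T12:00:00Z"), 4, 60 * 1000);
```

These values correspond to the maxItems and startDate described above; in a real service they would be passed to the history query on the Thing.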
Thingworx uses a Value Stream to log Time Series Data; we can run the QueryPropertyHistory service to get the historic data:


Some other notes for TTF prediction models:
• Always set useRedundancyFilter = true
• Always set useGoalHistory = false
• Always set lookahead = 1
• Training time is much longer than for Non Time Series data

After the model is published, testing it requires entering not only the current values of all features, but also as many previous values as defined by the look back size; then click Add Row to add each new record.
When creating Analysis Events, note that the Thing Properties used in Inputs Mapping must be Logged, because only Logged Properties can be queried for historic values from the Value Stream.
For Results Mapping, if we bind the result to a Thing Property, then even if that Property is set to Logged, the update made by Analytics will not be logged into the Value Stream. Instead, we can create another Property, bind it to Results Mapping, sync it with the final Property we are monitoring via a Service or Subscription, and log that one into the Value Stream. After that, we can track all of the historic changes with a TimeSeriesChart.
The charts below compare the TTFs predicted by Time Series Data models with the actual TTFs.


We can see that, compared to Non Time Series models, the prediction is much more accurate and matches the actual TTF curve faster.
The table below lists the settings of all models:


One more note: to make sure all features are written into the Value Stream at the same moment, in the same record, we should use UpdatePropertyValues instead of SetPropertyValues.
Some code for reference:
----------------------------------------------------------------------
// Use UpdatePropertyValues to update all attributes at the same moment

var params = {
    infoTableName: "InfoTable",
    dataShapeName: "NamedVTQ"
};

// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(VSTestDataShape)
var tempTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// All rows share one timestamp so they land in the same Value Stream record
var time = new Date();
var features = { F1: 71, F2: 72, F3: 73, F4: 74, F5: 75 };
for (var name in features) {
    tempTable.AddRow({
        time: time,
        name: name,
        quality: "GOOD",
        value: features[name]
    });
}

me.UpdatePropertyValues({
    values: tempTable /* INFOTABLE */
});

var result = tempTable;
----------------------------------------------------------------------



May 18, 2020

Thingworx Analytics Introduction 2/3: Non Time Series TTF Prediction


This is the 2nd article of the series: Thingworx Analytics introduction.
The 1st article introduces Analytics Function and Modules.
The 3rd article introduces Time Series TTF prediction.

This article covers TTF (Time to Failure) prediction, which is quite useful in IoT applications.
Environment: Thingworx Platform 8.5, Analytics Server 8.5, Analytics Extension 8.5.

Step 1, configure Analytics setting.
Click the Analytics icon >> Analytics Manager >> Analysis Providers, and create a new Analysis Provider with Connector = TW.AnalysisServices.AnalyticsServer.AnalyticsServerConnector.
Click Analytics Builder >> Settings, then select and set the Analytics Server.

Step 2, create Analysis Data.
We need to prepare a CSV-format data file and a JSON-format data type file.
The 1st line of the CSV file is its header, and it should match the field definitions in the JSON file.
The JSON file structure can look like this:
---------------------------------------------------
[
       {
              "fieldName": "s2",
              "values": null,
              "range": null,
              "dataType": "DOUBLE",
              "opType": "CONTINUOUS",
              "timeSamplingInterval": null,
              "isStatic": false
       },
       {
              "fieldName": "s3",
              "values": null,
              "range": null,
              "dataType": "DOUBLE",
              "opType": "CONTINUOUS",
              "timeSamplingInterval": null,
              "isStatic": false
       }
]
---------------------------------------------------
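Since most sensor fields share the same definition, the JSON metadata can be generated rather than hand-written. A small sketch (the helper is mine; the field layout matches the structure above):

```javascript
// Build the Analytics field-metadata array for a list of continuous DOUBLE fields.
function buildFieldMetadata(fieldNames) {
    return fieldNames.map(function (name) {
        return {
            fieldName: name,
            values: null,
            range: null,
            dataType: "DOUBLE",
            opType: "CONTINUOUS",
            timeSamplingInterval: null,
            isStatic: false
        };
    });
}

// Serialize with the same shape as the example file
var metadataJson = JSON.stringify(buildFieldMetadata(["s2", "s3"]), null, 4);
```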
The CSV file includes the following information:
• goalField, for result checking.
• Key parameter fields.
• A data filter field, e.g. a field record_purpose to separate training data from scoring data, where training data is for model training and scoring data is for validation or testing.
• If the dataset is time series data, we need 2 additional fields to identify AssetID (the failure sequence) and Cycle (the cycle number within each AssetID).
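The filter-field idea can be sketched as a simple split over parsed CSV records (the helper and sample rows are illustrative only):

```javascript
// Split dataset rows by the record_purpose filter field:
// "training" rows train the model, everything else scores/validates it.
function splitByPurpose(rows) {
    var result = { training: [], scoring: [] };
    rows.forEach(function (row) {
        if (row.record_purpose === "training") {
            result.training.push(row);
        } else {
            result.scoring.push(row);
        }
    });
    return result;
}

// Example rows with a sensor field, a goal field, and the filter field
var rows = [
    { s2: 641.8, ttf: 191, record_purpose: "training" },
    { s2: 642.5, ttf: 190, record_purpose: "training" },
    { s2: 643.0, ttf: 112, record_purpose: "scoring" }
];
var split = splitByPurpose(rows);
```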

Here I used a public NASA dataset for training, download URL: https://c3.nasa.gov/dashlink/resources/139/
When the data is ready, click Analytics Builder >> Data >> New...
Select the CSV file and the JSON file, and check the option “Review uploaded metadata”.
For most parameter fields, the data type should be Continuous.
For the goal field, the data type should be Continuous or Boolean; in this example we use Continuous.
The filter field’s data type should be Informational.
Click “Create Dataset”.
When it’s done, the newly created Data will be shown in the Datasets list.
Select the dataset, and click Dataset >> View >> Filters >> New to create a filter.

Step 3, create Machine Learning model.
Click Analytics Builder >> Models >> New.
Select the dataset, select the goal field and filter, and select any fields to exclude:



Click Advanced Model Configuration:


In Machine Learning, we normally split data into a training set, a validation set, and a test set, at a ratio of 60%, 20%, 20%.
In this step, we can use the Filter to hold out the test data, and use Validation Holdout % to define the validation percentage; the default setting of 20% is fine.
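To make the percentages concrete: the filter removes the test rows first, and the Validation Holdout % is then taken from what remains, so the default 20% holdout on the remaining 80% gives a 64/16/20 overall split, while a 25% holdout would give exactly 60/20/20. A quick sketch of the arithmetic (the helper name is mine):

```javascript
// Work out how many rows land in each set, given a dataset size,
// the percentage removed by the test filter, and the validation holdout %.
function holdoutCounts(totalRows, testPct, validationHoldoutPct) {
    var afterFilter = Math.round(totalRows * (1 - testPct / 100)); // rows left after the test filter
    var validation = Math.round(afterFilter * validationHoldoutPct / 100);
    return {
        test: totalRows - afterFilter,
        validation: validation,
        training: afterFilter - validation
    };
}

// 1000 rows, 20% filtered out as test, default 20% validation holdout
var counts = holdoutCounts(1000, 20, 20);
```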
Learning Techniques lists the available Machine Learning algorithms; we can manually add algorithms and modify their parameters.
For each new model, I’d suggest training it with different algorithms and testing them to find the most accurate one.
The same algorithm can also be trained with different parameters.
Ensemble Technique is the method for mixing multiple algorithms; we can also try and test different settings.
My training methods:


Click Submit to create the model.
A small dataset requires little training time, but time series data requires much longer; and for any dataset, the bigger the data size, the longer the training time.
When it’s done, we can find the new model in Analytics Builder >> Models.
Select the model and click View to see the model information.
Select the model and click Publish to publish it to Analytics Manager; it will appear under Analytics Manager >> Analytics Models, and a Test Page will pop up.

Step 4, test model.
The test page pops up when the model is published; we can also access it via Analytics Manager >> Analysis Models: select the model, then click View >> Test:


For causalTechnique, we normally set FULL_RANGE.
For goalField, enter the name of the goal field.
Then enter the values of all features/sensors and click Add Row. For time series data, we need to add multiple rows.
Select the 1st row, click Set Parent Row, then click Submit Job.
The system will calculate the result based on the model and the input values.
The calculation may take a few seconds.
Under Results Data Shape below, select AnalyticsServerConnector.xxx and click Refresh Job; then you can see the result value.
For more detailed information, check Analysis Jobs.

Step 5, set automatic calculation.
After the model is created, we might monitor it for some time to check the predictions against actual values.
Taking the NASA data as an example, we can use some of its data for comparison; refer to the steps below.
First, build a Data Table and load the CSV data into it.
Then build a Thing whose Properties map to the model features.
Then create a Timer to periodically read data from the Data Table and write the values into the Thing, and use these data for the test.
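The Timer's job on each tick can be sketched as a small replay helper (the names and rows are made up; in Thingworx the values returned here would be written to the Thing, e.g. with UpdatePropertyValues):

```javascript
// Replay rows from a pre-loaded dataset one at a time, as a Timer subscription would.
// Returns the property values to write on each tick, cycling when the data runs out.
function makeReplayer(rows) {
    var index = 0;
    return function nextTick() {
        var row = rows[index % rows.length];
        index++;
        return row;
    };
}

// Two hypothetical sensor records loaded from the Data Table
var testRows = [
    { s2: 641.8, s3: 1589.7 },
    { s2: 642.5, s3: 1591.8 }
];
var tick = makeReplayer(testRows);
var first = tick();   // values written on the 1st Timer event
var second = tick();  // values written on the 2nd
```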
Click Analytics Manager >> Analysis Models, and enable the newly created model.
Click Analysis Events >> New; set Source Type = Thing, Source = the newly created Thing, Event = DataChange, and Property = the trigger property.
Save, then select the new Event under Analysis Events, click Map Data >> Inputs Mapping, set Source Type = Thing, and bind the model features to Thing Properties.
Tips: model feature names start with _, and causalTechnique & goalField can use static values. So if we define such features on the Thing, we can use Map All in this step to map all features automatically.
Then click Results Mapping and bind the test result to a Thing Property. Please note: the system-generated result will be updated, but will NOT be logged into the Value Stream, so we need to build another Property and use a Service to sync the data and eventually log it to the Value Stream.
When the Event is set up, the system monitors and triggers automatically, and outputs values by calling the internal Analytics API.
For both manual tests and Event calculations, we can see the details in Analysis Jobs.
My TTF Demo Mashup, for reference:


Comparison of different models:


Some notes:
• If you didn’t install Analytics Platform, Analytics runs jobs in synchronous mode. That means if many jobs are submitted at the same time, only 1 job is running at any moment and the other jobs are pending (state = waiting). To avoid waiting jobs, we can manually create multiple trigger properties and use PAUSE in JavaScript to stagger them on purpose.
• If the raw data has many features, we can build a smaller model with 2 features, use it to train and test, find issues in the modeling, and optimize the logic of the Data Table/Thing/Timer, then extend to all features.
• Be cautious with Timers, because wrong operations can flood the Value Stream with data or generate lots of waiting jobs, and too many waiting jobs will block analysis and prediction. We can use the Resource TW.AnalysisServices.JobManagementServicesAPI.DeleteJobs to delete jobs by force, and check the total number of jobs by running the Data Table service TW.AnalysisServices.AnalysisDataTable.GetDataTableEntryCounts.
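The staggering trick from the first note can be sketched like this (pause here is a stand-in for the Thingworx PAUSE snippet, and the trigger names are made up):

```javascript
// Stagger writes to several trigger properties so the synchronous
// Analytics connector never sees two jobs submitted at the same moment.
function pause(ms) { // stand-in for the Thingworx PAUSE snippet
    var end = Date.now() + ms;
    while (Date.now() < end) { /* busy wait */ }
}

function fireStaggered(triggers, delayMs, fire) {
    triggers.forEach(function (name, i) {
        if (i > 0) pause(delayMs); // space the jobs out on purpose
        fire(name);                // e.g. write the trigger property on the Thing
    });
}

// Record when each hypothetical trigger fires
var fired = [];
fireStaggered(["trigger_modelA", "trigger_modelB", "trigger_modelC"], 20, function (name) {
    fired.push({ name: name, at: Date.now() });
});
```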



Thingworx Analytics Introduction 1/3: Analytics Function and Modules


This is the first article of the series: Thingworx Analytics introduction.
The 2nd article introduces Non Time Series TTF (Time to Failure) prediction.
The 3rd article introduces Time Series TTF prediction.

In 2015, PTC acquired the Machine Learning company ColdLight, integrated its product into Thingworx, and renamed it Thingworx Analytics.
The PTC Thingworx Analytics Help Site lists its functions:
• Explanatory Analytics: such as identifying signals, i.e. how strongly each field relates to the goal.
• Descriptive Analytics: such as calculation of average, median, and standard deviation values.
• Model Generation: creating prediction models, including some popular Machine Learning algorithms.
• Predictive Scoring: predicting result values based on a trained model and parameters.
• Prescriptive Scoring: observing how result values change when model parameters are modified.
• Confidence Models: converting a prediction result into probabilities over value ranges.
• Anomaly Detection: filtering out abnormal signals by comparing against high and low limits.
• Time Series Predictions: some signals are time series related, so each prediction checks not only the current values but also the values of previous cycles.
• Learners and Ensemble Techniques: Machine Learning algorithms and ways of mixing them.

Analytics implements these Machine Learning algorithms:
• Linear Regression
• Logistic Regression
• Decision Tree
• Neural Network
• Random Forest
• Gradient Boost
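To make the first of these concrete, here is a minimal sketch of what a linear regression learner does: fit a slope and intercept to feature/goal pairs by ordinary least squares (plain JavaScript, nothing Thingworx-specific; the sample data is invented):

```javascript
// Ordinary least squares for one feature: fit y = a*x + b.
function fitLinear(xs, ys) {
    var n = xs.length;
    var sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
    for (var i = 0; i < n; i++) {
        sumX += xs[i];
        sumY += ys[i];
        sumXY += xs[i] * ys[i];
        sumXX += xs[i] * xs[i];
    }
    var a = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
    var b = (sumY - a * sumX) / n;
    return { slope: a, intercept: b };
}

// Perfectly linear toy data: ttf = -2 * cycle + 100
var model = fitLinear([1, 2, 3, 4], [98, 96, 94, 92]);
```

The other learners (trees, networks, ensembles) fit far richer functions, but the training loop is conceptually the same: minimize the error between predicted and actual goal values.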

Now let’s look at the Analytics modules.
Analytics has 3 installation modules: Analytics Server, Analytics Platform, and Analytics Client (extension).
When Analytics Server is installed, it adds these services to Windows:
• twas-analytics-worker-1
• twas-analytics-worker-2
• twas-analytics-worker-3
• twas-async-ms
• twas-sync-ms
• twas-twx-adapter
• twas-zookeeper

And it creates these entities in the Thingworx platform:
• StatisticalCalculationMicroserver, with some SPC calculation services.
• StatisticalMonitoringMicroserver, with some SPC distribution services.
• Some Thing entities with AnalyticsServer in their names; they can be found in Monitoring >> Remote Things and can also be accessed via the Thingworx Server API.
• Data Tables with the prefix TW.AnalysisServices; they are the database of Analytics.
• Resources with the prefix TW.AnalysisServices, mapping to Analytics Builder functions.

Analytics Platform is an independent module; it can improve performance, and we can run basic predictions with it.
Analytics Client is installed as an extension; after installation, Thingworx adds an Analytics icon to the left action bar.
The Analytics section has 2 categories of functions: Builder and Manager. Builder is for model building; Manager is for background handling.

May 11, 2020

The Witcher 3 and Games as the Ninth Art

I have been playing The Witcher 3 on and off recently, and it has drawn me in deeply; it made me truly appreciate the charm of games as the ninth art.
The Witcher 3 is a role-playing game; like a good American TV series, it is highly immersive and full of moving details.
This article analyzes its artistry from three angles: literature, drama, and visual art.

First, the literary side.
The Witcher 3 is adapted from the fantasy novels of the same name. The novels have a grand worldview and richly drawn characters, and the game preserves this: almost every NPC has fitting lines when you interact with them.
The personalities of the main characters also match the development of the plot very well.
Many players relish the game’s abundant love scenes (in the wild, on the stuffed unicorn, on a boat, on a cloud, and so on), but analyzed carefully, these scenes are also cleverly and fittingly designed.
Take Keira, Triss, and Yennefer as examples.
Keira’s scene happens in a swamp, out in the open where all things flourish. Keira is a passer-by in Geralt’s life; however the plot develops, they will never end up together. So the two of them can only share a fleeting, dreamlike affair, and Keira enjoys that unfettered style anyway.
Triss’s scene happens at a lighthouse, on the edge of the city. Triss has just been thrown out by her landlord and has just escaped the clutches of the witch hunters, so both of them are drifters at the city’s margin; but the light of the lighthouse brings them warmth and hope, foreshadowing that they can be together.
Yennefer is different: both of her romantic scenes happen indoors, in her own room, which shows that she is the dominant one in the relationship.

Next, the dramatic side.
Two plays-within-the-play give a glimpse of this.
One is a play written specifically to lure out the doppler; its theme is a witcher rescuing a doppler, while the corresponding reality is that the witcher needs the doppler’s help.
The other is Priscilla singing “The Wolven Storm” at the Kingfisher Inn. The song’s theme is the love story of Geralt and Yennefer. It is a complete song, sung in dozens of languages on YouTube, and the game even produced a full MV for it; we can see the listeners visibly moved, some silent and some in tears, with many close-ups of micro-expressions.

Then the visual art.
My feeling is that the game has a strong oil-painting quality and pays great attention to the use of light.
Look at this wall hanging of the Ladies of the Wood below, in a distinctly Waterhouse style:


Geralt finds Ciri in the hut on the Isle of Mists and discovers she is no longer breathing; the scene unmistakably evokes the Madonna holding Jesus taken down from the cross, and Ciri’s later awakening echoes the Resurrection:


Finally, Triss in her room: Geralt slowly walks into her dim little attic room, and the candlelight, the shadows, the paintings on the wall, the still life in the corner, the mottled walls, all of it irresistibly recalls Rembrandt’s oil paintings, and all of it is rendered in real time:


April 29, 2020

Thingworx Analytics Introduction, Part 3: Modeling and Prediction with Time Series Data



PTC’s experts say that for TTF prediction, a Time Series Data model is better than the standard model.
While building my demo I also tried to build a Time Series Data model, but I ran into many pitfalls, which is why the previous two articles covered only the standard model.
Later I obtained a PTC service account, used it to read some new documents, and finally got this approach to work.

First, what is a Time Series Data model?
In machine learning terms, training a model means finding the best-matching parameters, which are then combined with the variables to calculate the output result.
When a standard model is trained and calculates, each variable’s current value has no relation to its historical values.
For a Time Series Data model, a calculation looks not only at the variables’ current values but also at several historical values, and these values are time-related.
In Thingworx Analytics there are two important concepts: look back size and data sampling frequency.
The look back size is the number of samples to look back, the data sampling frequency is how often a sample is taken, and their product is the sampling window.
For example, if look back size = 4 and data sampling frequency = 1 minute, the total sampling window is 4 minutes.
Thingworx uses a Value Stream to record time series data, and we can use QueryPropertyHistory to get a variable’s historical values within the sampling window:


Also, a few parameters need attention when modeling:
-       useRedundancyFilter = true
-       useGoalHistory = false
-       lookahead = 1
The training time will be much longer than for a standard model.

After the model is published, testing requires entering not only each variable’s current value but also several historical values, then clicking Add Row.
When creating Analysis Events, note that the Thing Properties mapped in Inputs Mapping must be Logged, because only Logged records can be queried for historical values through the Value Stream.
Also, if Results Mapping binds the result to a Thing Property, then even if that Property is set to Logged, the update made by Analytics will not be recorded in the Value Stream. The solution is to create another Property, bind it to Results Mapping, sync the value to the final Property through a Service or Subscription, and record it in the Value Stream. After this treatment, all historical predicted values can be displayed with a TimeSeriesChart.

The chart below compares the prediction results of the Time Series Data model with the actual results:


We can see that the prediction is more accurate and fits the actual curve faster.

The table below lists the modeling parameters of each model:


Also, I found a solution for guaranteeing transactionality when writing to the Value Stream: use the UpdatePropertyValues function.
Reference code below:
----------------------------------------------------------------------
// Use UpdatePropertyValues to update all attributes at the same moment

var params = {
    infoTableName: "InfoTable",
    dataShapeName: "NamedVTQ"
};

// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(VSTestDataShape)
var tempTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// All rows share one timestamp so they land in the same Value Stream record
var time = new Date();
var features = { F1: 71, F2: 72, F3: 73, F4: 74, F5: 75 };
for (var name in features) {
    tempTable.AddRow({
        time: time,
        name: name,
        quality: "GOOD",
        value: features[name]
    });
}

me.UpdatePropertyValues({
    values: tempTable /* INFOTABLE */
});

var result = tempTable;
----------------------------------------------------------------------



April 17, 2020

Thingworx Analytics Introduction, Part 2: TTF Prediction in Practice


This article covers an application of particular interest in the IoT field: machine Time To Failure (TTF) prediction.
Environment: Thingworx Platform 8.5, Analytics Server 8.5, Analytics Extension 8.5.

1步,配置Analytics参数。
点击Analytics图标>>Analytics Manager>>Analysis Providers,新建一个Analysis ProviderConnector类型为TW.AnalysisServices.AnalyticsServer.AnalyticsServerConnector
点击Analytics Builder>>Setting,设置Analytics Server

2步,创建分析数据。
首先要准备好CSV格式的数据文件和JSON格式的数据类型说明文件。
CSV文件的第一行是表头,应该和JSON文件保持一致。
JSON文件结构可参考:
---------------------------------------------------
[
       {
              "fieldName": "s2",
              "values": null,
              "range": null,
              "dataType": "DOUBLE",
              "opType": "CONTINUOUS",
              "timeSamplingInterval": null,
              "isStatic": false
       },
       {
              "fieldName": "s3",
              "values": null,
              "range": null,
              "dataType": "DOUBLE",
              "opType": "CONTINUOUS",
              "timeSamplingInterval": null,
              "isStatic": false
       }
]
---------------------------------------------------
The CSV file includes the following information:
1)     A field for judging the result (goalField).
2)     Key parameter fields.
3)     A field for filtering data, e.g. a field record_purpose where some records have the value training and others scoring; the former is used to train the model, the latter to analyze the model’s accuracy.
4)    For time series data, two more fields are needed, identifying the failure sequence and the cycle number within it.
In principle, time series modeling is more suitable for machine TTF, but because of how Thingworx itself works (it uses a Value Stream to store time series data but cannot guarantee transactionality), my tests on the 8.5 trial version could not make it work, so this article does not use time series.
This article uses a public NASA dataset for training, download URL:
https://c3.nasa.gov/dashlink/resources/139/
When the data is ready, click Analytics Builder >> Data >> New...
Select the CSV file and the JSON file, then check “Review uploaded metadata” to double-check the data types.
Ordinary parameter fields are of type Continuous; the goal field is Continuous or Boolean, Continuous in this example.
The filter field’s data type is Informational.
Click Create Dataset.
After creation, the new Data appears in the Datasets list.
Select the new Dataset >> View >> Filters >> New to create a filter.

3步,创建机器学习模型。
点击Analytics Builder>>Models>>New
选择Dataset,然后选择结果字段和过滤器,此外还可以选择排除字段,以减少干扰提高效率:


Click Advanced Model Configuration to configure advanced parameters:


In machine learning, we usually split the data by purpose into three parts, a training set, a validation set, and a test set, e.g. at a ratio of 60%, 20%, 20%.
In this step, we can use the filter to exclude the test data, then use Validation Holdout % to define the validation percentage, 20% by default.
Learning Techniques contains the available machine learning algorithms; we can also manually add algorithms and modify parameters.
My suggestion is to train once with each of several algorithms, then use the test data to analyze how well each one fits.
The same algorithm can also be trained multiple times with different parameters.
Ensemble Technique is the way multiple algorithms are mixed; different options can be tried here as well.
My training methods:


In fact, a large part of the work of machine learning engineers, or so-called data scientists, consists of exactly this: choosing algorithms, tuning parameters, and analyzing results.
Click Submit to create the model.
Small datasets usually take little time to train, but time series data takes much longer; also, the larger the dataset, the longer the training time.
On success, the new model can be seen under Analytics Builder >> Models.
Select the model and click Publish; on success, the model appears under Analytics Manager >> Analytics Models, and the test page opens automatically.

4步,模型初步测试。
模型在创建时会自动进入测试页面,我们也可以Analytics Manager>>Analysis Models选中模型后,点击View>>Test进入测试页面:


causalTechnique is normally set to FULL_RANGE.
goalField is the field used for judging the result.
Then enter the value of each parameter and click Add Row. For time series data, multiple values must be entered.
Select the first row, click Set Parent Row, then click Submit Job.
The system calculates the result from the input data combined with the algorithm model.
The result is available after a few seconds.
In the Results Data Shape area below, select AnalyticsServerConnector.xxx and click Refresh Job to see the result.
More information about this calculation can be viewed under Analysis Jobs.

5步,设置模型自动计算。
模型建立以后,我们可能还会继续观察一段时间,用预测结果比较实际值,从而对模型的准确度有更深入的认识。
以本文的NASA数据集为例,我们可利用其部分数据进行比较,具体如下。
首先建立一个DataTable,把CSV中的数据导入。
然后建立一个Thing,把各参数值作为Property进行更新。
然后建立一个Timer,把测试数据定期读入数据,用这些值进行测试。
点击Analytics Manager>>Analysis Models,把新建的Model Enable
点击Analysis Events>>NewSource Type=ThingSource为新建的ThingEvent=DataChageProperty设置为触发字段。
保存后在Analysis Events中选择新建的Event,点击Map Data>>Inputs MappingSource Type=Thing,把模型的参数与Thing Property进行绑定。
Tips:模型的参数以_开头,此外还有causalTechniquegoalField,如果我们在Thing中定义了这些参数,则可以在此步骤中使用Map All,则系统会自动映射名称匹配的参数。
然后点击Results Mapping,把测试结果和Thing Property绑定。此处需注意,由系统算法计算得到的值会得到更新,但是不会记录到Value Stream中,所以我们必须另建一个Property,通过Service把结果同步过来。
配置好Event后,系统会自动监控触发条件,一旦符合则调用Analytics API自动计算得到结果,然后予以输出。
手动测试的结果和Event的结果都可以在Analysis Jobs中查看。

Below is my TTF Demo Mashup:


Below is a comparison of the different models:


Below are some pitfalls and notes:
1)    Because time series data uses a Value Stream to record historical values, and a Value Stream cannot guarantee transactionality when multiple Properties are updated at the same time, Events cannot be used for real-time time series prediction.
2)    Because Analytics executes synchronously, when comparing multiple models you must make sure that the Events are not triggered at the same point in time, otherwise there will be a large number of WAITING jobs. My approach is to duplicate the trigger field several times and use the PAUSE instruction to add delays.
3)    If the raw data has many parameters, first build a small model with only 2 fields, then train and test it; use this model to find problems in the modeling and to optimize the logic of the Data Table/Thing/Timer, and only then extend the model to more fields. Things will go much more smoothly.
4)    Use Timers with caution; careless operations can massively inflate the Value Stream data or generate a large number of WAITING jobs, and too many WAITING jobs will block analysis and prediction. Jobs can be force-deleted through the Resource TW.AnalysisServices.JobManagementServicesAPI.DeleteJobs, and the number of jobs can be checked through the Data Table service TW.AnalysisServices.AnalysisDataTable.GetDataTableEntryCounts.