prefect
Flow
Bases: Generic[P, R]
A Prefect workflow definition.
Note: We recommend using the @flow decorator for most use-cases.
Wraps a function with an entrypoint to the Prefect engine. To preserve the input and output types, we use the generic type variables P and R for "Parameters" and "Returns" respectively.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `fn` | `Callable[P, R]` | The function defining the workflow. | required |
| `name` | `Optional[str]` | An optional name for the flow; if not provided, the name will be inferred from the given function. | `None` |
| `version` | `Optional[str]` | An optional version string for the flow; if not provided, we will attempt to create a version string as a hash of the file containing the wrapped function; if the file cannot be located, the version will be null. | `None` |
| `flow_run_name` | `Optional[Union[Callable[[], str], str]]` | An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string. | `None` |
| `task_runner` | `Union[Type[TaskRunner], TaskRunner, None]` | An optional task runner to use for task execution within the flow; if not provided, the default task runner will be used. | `None` |
| `description` | `Optional[str]` | An optional string description for the flow; if not provided, the description will be pulled from the docstring for the decorated function. | `None` |
| `timeout_seconds` | `Union[int, float, None]` | An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called. | `None` |
| `validate_parameters` | `bool` | By default, parameters passed to flows are validated by Pydantic. This will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type; for example, if a parameter is defined as `x: int` and "5" is passed, it will be resolved to `5`. | `True` |
| `retries` | `Optional[int]` | An optional number of times to retry on flow run failure. | `None` |
| `retry_delay_seconds` | `Optional[Union[int, float]]` | An optional number of seconds to wait before retrying the flow after failure. This is only applicable if `retries` is nonzero. | `None` |
| `persist_result` | `Optional[bool]` | An optional toggle indicating whether the result of this flow should be persisted to result storage. Defaults to `None`, in which case the default persistence behavior is used. | `None` |
| `result_storage` | `Optional[Union[ResultStorage, str]]` | An optional block to use to persist the result of this flow. This value will be used as the default for any tasks in this flow. If not provided, the local file system will be used unless called as a subflow, at which point the default will be loaded from the parent flow. | `None` |
| `result_serializer` | `Optional[ResultSerializer]` | An optional serializer to use to serialize the result of this flow for persistence. This value will be used as the default for any tasks in this flow. If not provided, the default serializer will be used unless called as a subflow, at which point the default will be loaded from the parent flow. | `None` |
| `on_failure` | `Optional[List[Callable[[Flow, FlowRun, State], None]]]` | An optional list of callables to run when the flow enters a failed state. | `None` |
| `on_completion` | `Optional[List[Callable[[Flow, FlowRun, State], None]]]` | An optional list of callables to run when the flow enters a completed state. | `None` |
| `on_cancellation` | `Optional[List[Callable[[Flow, FlowRun, State], None]]]` | An optional list of callables to run when the flow enters a cancelling state. | `None` |
| `on_crashed` | `Optional[List[Callable[[Flow, FlowRun, State], None]]]` | An optional list of callables to run when the flow enters a crashed state. | `None` |
| `on_running` | `Optional[List[Callable[[Flow, FlowRun, State], None]]]` | An optional list of callables to run when the flow enters a running state. | `None` |
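For illustration, a minimal sketch of constructing a flow with a few of these options through the @flow decorator (the option values below are arbitrary examples, not defaults):

    from prefect import flow
    from prefect.task_runners import ThreadPoolTaskRunner

    @flow(
        name="process-data",                 # overrides the name inferred from the function
        retries=2,                           # retry the flow run up to twice on failure
        retry_delay_seconds=10,              # wait 10 seconds between retries
        timeout_seconds=300,                 # mark the run as failed after 5 minutes
        task_runner=ThreadPoolTaskRunner(),  # run tasks in a thread pool
    )
    def process_data(items: list[int]) -> int:
        return sum(items)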
Source code in src/prefect/flows.py
__call__(*args, return_state=False, wait_for=None, **kwargs)
__call__(*args: P.args, **kwargs: P.kwargs) -> None
__call__(*args: P.args, **kwargs: P.kwargs) -> Coroutine[Any, Any, T]
__call__(*args: P.args, **kwargs: P.kwargs) -> T
__call__(*args: P.args, return_state: Literal[True], **kwargs: P.kwargs) -> Awaitable[State[T]]
__call__(*args: P.args, return_state: Literal[True], **kwargs: P.kwargs) -> State[T]
Run the flow and return its result.
Flow parameter values must be serializable by Pydantic.
If writing an async flow, this call must be awaited.
This will create a new flow run in the API.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `*args` | `args` | Arguments to run the flow with. | `()` |
| `return_state` | `bool` | Return a Prefect State containing the result of the flow run. | `False` |
| `wait_for` | `Optional[Iterable[PrefectFuture]]` | Upstream task futures to wait for before starting the flow if called as a subflow. | `None` |
| `**kwargs` | `kwargs` | Keyword arguments to run the flow with. | `{}` |
Returns:

| Type | Description |
|---|---|
| | If `return_state` is False, returns the result of the flow run. |
| | If `return_state` is True, returns the final state of the flow run. |
Define a flow
>>> @flow
>>> def my_flow(name):
>>> print(f"hello {name}")
>>> return f"goodbye {name}"
Run a flow
>>> my_flow("marvin")
hello marvin
"goodbye marvin"
Run a flow with additional tags
>>> from prefect import tags
>>> with tags("db", "blue"):
>>> my_flow("foo")
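Capture the final state instead of the raw result (a sketch using the return_state option documented above)
>>> state = my_flow("marvin", return_state=True)
hello marvin
>>> state.is_completed()
True
>>> state.result()
"goodbye marvin"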
Source code in src/prefect/flows.py
__get__(instance, owner)
Implement the descriptor protocol so that the flow can be used as an instance method. When an instance method is loaded, this method is called with the "self" instance as an argument. We return a copy of the flow with that instance bound to the flow's function.
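As a brief sketch of what this enables (assuming the flow is defined as an instance method):

    from prefect import flow

    class ETLJob:
        def __init__(self, source: str):
            self.source = source

        @flow
        def run(self):
            # "self" is bound to the flow's function via the descriptor protocol
            print(f"extracting from {self.source}")

    ETLJob("s3://bucket/raw").run()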
Source code in src/prefect/flows.py
deploy(name, work_pool_name=None, image=None, build=True, push=True, work_queue_name=None, job_variables=None, interval=None, cron=None, rrule=None, paused=None, schedules=None, concurrency_limit=None, triggers=None, parameters=None, description=None, tags=None, version=None, enforce_parameter_schema=True, entrypoint_type=EntrypointType.FILE_PATH, print_next_steps=True, ignore_warnings=False)
async
Deploys a flow to run on dynamic infrastructure via a work pool.
By default, calling this method will build a Docker image for the flow, push it to a registry, and create a deployment via the Prefect API that will run the flow on the given schedule.
If you want to use an existing image, you can pass build=False to skip building and pushing an image.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | The name to give the created deployment. | required |
| `work_pool_name` | `Optional[str]` | The name of the work pool to use for this deployment. Defaults to the value of the default work pool setting. | `None` |
| `image` | `Optional[Union[str, DockerImage]]` | The name of the Docker image to build, including the registry and repository. Pass a DockerImage instance to customize the Dockerfile used and build arguments. | `None` |
| `build` | `bool` | Whether or not to build a new image for the flow. If False, the provided image will be used as-is and pulled at runtime. | `True` |
| `push` | `bool` | Whether or not to push the built image to a registry. | `True` |
| `work_queue_name` | `Optional[str]` | The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used. | `None` |
| `job_variables` | `Optional[dict]` | Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings. | `None` |
| `interval` | `Optional[Union[int, float, timedelta]]` | An interval on which to execute the deployment. Accepts a number or a timedelta object to create a single schedule. If a number is given, it will be interpreted as seconds. Also accepts an iterable of numbers or timedelta to create multiple schedules. | `None` |
| `cron` | `Optional[str]` | A cron schedule string of when to execute runs of this deployment. Also accepts an iterable of cron schedule strings to create multiple schedules. | `None` |
| `rrule` | `Optional[str]` | An rrule schedule string of when to execute runs of this deployment. Also accepts an iterable of rrule schedule strings to create multiple schedules. | `None` |
| `triggers` | `Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]]` | A list of triggers that will kick off runs of this deployment. | `None` |
| `paused` | `Optional[bool]` | Whether or not to set this deployment as paused. | `None` |
| `schedules` | `Optional[List[DeploymentScheduleCreate]]` | A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like `timezone`. | `None` |
| `concurrency_limit` | `Optional[Union[int, ConcurrencyLimitConfig, None]]` | The maximum number of runs that can be executed concurrently. | `None` |
| `parameters` | `Optional[dict]` | A dictionary of default parameter values to pass to runs of this deployment. | `None` |
| `description` | `Optional[str]` | A description for the created deployment. Defaults to the flow's description if not provided. | `None` |
| `tags` | `Optional[List[str]]` | A list of tags to associate with the created deployment for organizational purposes. | `None` |
| `version` | `Optional[str]` | A version for the created deployment. Defaults to the flow's version. | `None` |
| `enforce_parameter_schema` | `bool` | Whether or not the Prefect API should enforce the parameter schema for the created deployment. | `True` |
| `entrypoint_type` | `EntrypointType` | Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment. | `FILE_PATH` |
| `print_next_steps_message` | | Whether or not to print a message with next steps after deploying the deployments. | required |
| `ignore_warnings` | `bool` | Whether or not to ignore warnings about the work pool type. | `False` |
Returns:

| Type | Description |
|---|---|
| `UUID` | The ID of the created/updated deployment. |
Examples:
Deploy a local flow to a work pool:
    from prefect import flow

    @flow
    def my_flow(name):
        print(f"hello {name}")

    if __name__ == "__main__":
        my_flow.deploy(
            "example-deployment",
            work_pool_name="my-work-pool",
            image="my-repository/my-image:dev",
        )

Deploy a remotely stored flow to a work pool:

    from prefect import flow

    if __name__ == "__main__":
        flow.from_source(
            source="https://github.com/org/repo.git",
            entrypoint="flows.py:my_flow",
        ).deploy(
            "example-deployment",
            work_pool_name="my-work-pool",
            image="my-repository/my-image:dev",
        )
Source code in src/prefect/flows.py
from_source(source, entrypoint)
async
classmethod
Loads a flow from a remote source.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `source` | `Union[str, RunnerStorage, ReadableDeploymentStorage]` | Either a URL to a git repository or a storage object. | required |
| `entrypoint` | `str` | The path to a file containing a flow and the name of the flow function in the format `./path/to/file.py:flow_func_name`. | required |
Returns:

| Type | Description |
|---|---|
| `Flow[P, R]` | A new `Flow` instance loaded from the remote source. |
Examples:
Load a flow from a public git repository:
    from prefect import flow
    from prefect.runner.storage import GitRepository
    from prefect.blocks.system import Secret

    my_flow = flow.from_source(
        source="https://github.com/org/repo.git",
        entrypoint="flows.py:my_flow",
    )

    my_flow()

Load a flow from a private git repository using an access token stored in a Secret block:

    from prefect import flow
    from prefect.runner.storage import GitRepository
    from prefect.blocks.system import Secret

    my_flow = flow.from_source(
        source=GitRepository(
            url="https://github.com/org/repo.git",
            credentials={"access_token": Secret.load("github-access-token")}
        ),
        entrypoint="flows.py:my_flow",
    )

    my_flow()

Load a flow from a local directory:

    # from_local_source.py

    from pathlib import Path
    from prefect import flow

    @flow(log_prints=True)
    def my_flow(name: str = "world"):
        print(f"Hello {name}! I'm a flow from a Python script!")

    if __name__ == "__main__":
        my_flow.from_source(
            source=str(Path(__file__).parent),
            entrypoint="from_local_source.py:my_flow",
        ).deploy(
            name="my-deployment",
            parameters=dict(name="Marvin"),
            work_pool_name="local",
        )
Source code in src/prefect/flows.py
serialize_parameters(parameters)
Convert parameters to a serializable form.
Uses FastAPI's jsonable_encoder to convert to JSON compatible objects without converting everything directly to a string. This maintains basic types like integers during API roundtrips.
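A small sketch of what this conversion might look like for typical inputs (the flow and values here are illustrative):

    from datetime import date
    from prefect import flow

    @flow
    def report(day: date, limit: int = 10):
        ...

    # Integers stay integers; richer types such as dates become JSON-compatible strings.
    print(report.serialize_parameters({"day": date(2024, 1, 1), "limit": 10}))
    # e.g. {'day': '2024-01-01', 'limit': 10}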
Source code in src/prefect/flows.py
serve(name=None, interval=None, cron=None, rrule=None, paused=None, schedules=None, global_limit=None, triggers=None, parameters=None, description=None, tags=None, version=None, enforce_parameter_schema=True, pause_on_shutdown=True, print_starting_message=True, limit=None, webserver=False, entrypoint_type=EntrypointType.FILE_PATH)
Creates a deployment for this flow and starts a runner to monitor for scheduled work.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `Optional[str]` | The name to give the created deployment. Defaults to the name of the flow. | `None` |
| `interval` | `Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]` | An interval on which to execute the deployment. Accepts a number or a timedelta object to create a single schedule. If a number is given, it will be interpreted as seconds. Also accepts an iterable of numbers or timedelta to create multiple schedules. | `None` |
| `cron` | `Optional[Union[Iterable[str], str]]` | A cron schedule string of when to execute runs of this deployment. Also accepts an iterable of cron schedule strings to create multiple schedules. | `None` |
| `rrule` | `Optional[Union[Iterable[str], str]]` | An rrule schedule string of when to execute runs of this deployment. Also accepts an iterable of rrule schedule strings to create multiple schedules. | `None` |
| `triggers` | `Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]]` | A list of triggers that will kick off runs of this deployment. | `None` |
| `paused` | `Optional[bool]` | Whether or not to set this deployment as paused. | `None` |
| `schedules` | `Optional[FlexibleScheduleList]` | A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like `timezone`. | `None` |
| `global_limit` | `Optional[Union[int, ConcurrencyLimitConfig, None]]` | The maximum number of concurrent runs allowed across all served flow instances associated with the same deployment. | `None` |
| `parameters` | `Optional[dict]` | A dictionary of default parameter values to pass to runs of this deployment. | `None` |
| `description` | `Optional[str]` | A description for the created deployment. Defaults to the flow's description if not provided. | `None` |
| `tags` | `Optional[List[str]]` | A list of tags to associate with the created deployment for organizational purposes. | `None` |
| `version` | `Optional[str]` | A version for the created deployment. Defaults to the flow's version. | `None` |
| `enforce_parameter_schema` | `bool` | Whether or not the Prefect API should enforce the parameter schema for the created deployment. | `True` |
| `pause_on_shutdown` | `bool` | If True, the provided schedules will be paused when the serve function is stopped. If False, the schedules will continue running. | `True` |
| `print_starting_message` | `bool` | Whether or not to print the starting message when flow is served. | `True` |
| `limit` | `Optional[int]` | The maximum number of runs that can be executed concurrently by the created runner; only applies to this served flow. To apply a limit across multiple served flows, use `global_limit`. | `None` |
| `webserver` | `bool` | Whether or not to start a monitoring webserver for this flow. | `False` |
| `entrypoint_type` | `EntrypointType` | Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment. | `FILE_PATH` |
Examples:
Serve a flow:
    from prefect import flow

    @flow
    def my_flow(name):
        print(f"hello {name}")

    if __name__ == "__main__":
        my_flow.serve("example-deployment")

Serve a flow and run it every hour:

    from prefect import flow

    @flow
    def my_flow(name):
        print(f"hello {name}")

    if __name__ == "__main__":
        my_flow.serve("example-deployment", interval=3600)
Source code in src/prefect/flows.py
to_deployment(name, interval=None, cron=None, rrule=None, paused=None, schedules=None, concurrency_limit=None, parameters=None, triggers=None, description=None, tags=None, version=None, enforce_parameter_schema=True, work_pool_name=None, work_queue_name=None, job_variables=None, entrypoint_type=EntrypointType.FILE_PATH)
async
Creates a runner deployment object for this flow.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | The name to give the created deployment. | required |
| `interval` | `Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]` | An interval on which to execute the new deployment. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds. | `None` |
| `cron` | `Optional[Union[Iterable[str], str]]` | A cron schedule of when to execute runs of this deployment. | `None` |
| `rrule` | `Optional[Union[Iterable[str], str]]` | An rrule schedule of when to execute runs of this deployment. | `None` |
| `paused` | `Optional[bool]` | Whether or not to set this deployment as paused. | `None` |
| `schedules` | `Optional[FlexibleScheduleList]` | A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options such as `timezone`. | `None` |
| `concurrency_limit` | `Optional[Union[int, ConcurrencyLimitConfig, None]]` | The maximum number of runs of this deployment that can run at the same time. | `None` |
| `parameters` | `Optional[dict]` | A dictionary of default parameter values to pass to runs of this deployment. | `None` |
| `triggers` | `Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]]` | A list of triggers that will kick off runs of this deployment. | `None` |
| `description` | `Optional[str]` | A description for the created deployment. Defaults to the flow's description if not provided. | `None` |
| `tags` | `Optional[List[str]]` | A list of tags to associate with the created deployment for organizational purposes. | `None` |
| `version` | `Optional[str]` | A version for the created deployment. Defaults to the flow's version. | `None` |
| `enforce_parameter_schema` | `bool` | Whether or not the Prefect API should enforce the parameter schema for the created deployment. | `True` |
| `work_pool_name` | `Optional[str]` | The name of the work pool to use for this deployment. | `None` |
| `work_queue_name` | `Optional[str]` | The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used. | `None` |
| `job_variables` | `Optional[Dict[str, Any]]` | Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings. | `None` |
| `entrypoint_type` | `EntrypointType` | Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment. | `FILE_PATH` |
Examples:
Prepare two deployments and serve them:
    from prefect import flow, serve

    @flow
    def my_flow(name):
        print(f"hello {name}")

    @flow
    def my_other_flow(name):
        print(f"goodbye {name}")

    if __name__ == "__main__":
        hello_deploy = my_flow.to_deployment("hello", tags=["dev"])
        bye_deploy = my_other_flow.to_deployment("goodbye", tags=["dev"])
        serve(hello_deploy, bye_deploy)
Source code in src/prefect/flows.py
validate_parameters(parameters)
Validate parameters for compatibility with the flow by attempting to cast the inputs to the associated types specified by the function's type annotations.
Returns:

| Type | Description |
|---|---|
| `Dict[str, Any]` | A new dict of parameters that have been cast to the appropriate types |

Raises:

| Type | Description |
|---|---|
| `ParameterTypeError` | if the provided parameters are not valid |
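A short sketch of the casting behavior (the flow and values here are illustrative):

    from prefect import flow

    @flow
    def add(x: int, y: int):
        return x + y

    # "5" is coerced to the annotated int type; incompatible values raise ParameterTypeError.
    print(add.validate_parameters({"x": "5", "y": 2}))
    # e.g. {'x': 5, 'y': 2}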
Source code in src/prefect/flows.py
visualize(*args, **kwargs)
async
Generates a graphviz object representing the current flow. In IPython notebooks, it's rendered inline, otherwise in a new window as a PNG.
Raises:

| Type | Description |
|---|---|
| `ImportError` | If `graphviz` isn't installed. |
| `GraphvizExecutableNotFoundError` | If the `dot` executable isn't found. |
| `FlowVisualizationError` | If the flow can't be visualized for any other reason. |
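A sketch of typical usage in an IPython notebook cell (visualize is async, so it must be awaited):

    from prefect import flow, task

    @task
    def fetch():
        return 42

    @flow
    def pipeline():
        return fetch()

    # Renders the flow's task graph inline in a notebook, or as a PNG otherwise.
    await pipeline.visualize()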
Source code in src/prefect/flows.py
with_options(*, name=None, version=None, retries=None, retry_delay_seconds=None, description=None, flow_run_name=None, task_runner=None, timeout_seconds=None, validate_parameters=None, persist_result=NotSet, result_storage=NotSet, result_serializer=NotSet, cache_result_in_memory=None, log_prints=NotSet, on_completion=None, on_failure=None, on_cancellation=None, on_crashed=None, on_running=None)
Create a new flow from the current object, updating provided options.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `Optional[str]` | A new name for the flow. | `None` |
| `version` | `Optional[str]` | A new version for the flow. | `None` |
| `description` | `Optional[str]` | A new description for the flow. | `None` |
| `flow_run_name` | `Optional[Union[Callable[[], str], str]]` | An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string. | `None` |
| `task_runner` | `Union[Type[TaskRunner], TaskRunner, None]` | A new task runner for the flow. | `None` |
| `timeout_seconds` | `Union[int, float, None]` | A new number of seconds to fail the flow after if still running. | `None` |
| `validate_parameters` | `Optional[bool]` | A new value indicating if flow calls should validate given parameters. | `None` |
| `retries` | `Optional[int]` | A new number of times to retry on flow run failure. | `None` |
| `retry_delay_seconds` | `Optional[Union[int, float]]` | A new number of seconds to wait before retrying the flow after failure. This is only applicable if `retries` is nonzero. | `None` |
| `persist_result` | `Optional[bool]` | A new option for enabling or disabling result persistence. | `NotSet` |
| `result_storage` | `Optional[ResultStorage]` | A new storage type to use for results. | `NotSet` |
| `result_serializer` | `Optional[ResultSerializer]` | A new serializer to use for results. | `NotSet` |
| `cache_result_in_memory` | `Optional[bool]` | A new value indicating if the flow's result should be cached in memory. | `None` |
| `on_failure` | `Optional[List[Callable[[Flow, FlowRun, State], None]]]` | A new list of callables to run when the flow enters a failed state. | `None` |
| `on_completion` | `Optional[List[Callable[[Flow, FlowRun, State], None]]]` | A new list of callables to run when the flow enters a completed state. | `None` |
| `on_cancellation` | `Optional[List[Callable[[Flow, FlowRun, State], None]]]` | A new list of callables to run when the flow enters a cancelling state. | `None` |
| `on_crashed` | `Optional[List[Callable[[Flow, FlowRun, State], None]]]` | A new list of callables to run when the flow enters a crashed state. | `None` |
| `on_running` | `Optional[List[Callable[[Flow, FlowRun, State], None]]]` | A new list of callables to run when the flow enters a running state. | `None` |
Returns:

| Type | Description |
|---|---|
| `Self` | A new `Flow` instance with the provided options applied. |
Create a new flow from an existing flow and update the name:
>>> @flow(name="My flow")
>>> def my_flow():
>>> return 1
>>>
>>> new_flow = my_flow.with_options(name="My new flow")
Create a new flow from an existing flow, update the task runner, and call
it without an intermediate variable:
>>> from prefect.task_runners import ThreadPoolTaskRunner
>>>
>>> @flow
>>> def my_flow(x, y):
>>> return x + y
>>>
>>> state = my_flow.with_options(task_runner=ThreadPoolTaskRunner)(1, 3)
>>> assert state.result() == 4
Source code in src/prefect/flows.py
State
Bases: ObjectBaseModel, Generic[R]
The state of a run.
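A small sketch of inspecting a state returned by a flow called with return_state=True (as shown in the examples elsewhere on this page):

    from prefect import flow

    @flow
    def greet():
        return "hello"

    state = greet(return_state=True)
    print(state.type)            # e.g. StateType.COMPLETED
    print(state.is_completed())  # True
    print(state.result())        # "hello"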
Source code in src/prefect/client/schemas/objects.py
__repr__()
Generates a complete state representation appropriate for introspection and debugging, including the result:
MyCompletedState(message="my message", type=COMPLETED, result=...)
Source code in src/prefect/client/schemas/objects.py
__str__()
Generates a simple state representation appropriate for logging:
MyCompletedState("my message", type=COMPLETED)
Source code in src/prefect/client/schemas/objects.py
default_name_from_type()
If a name is not provided, use the type
Source code in src/prefect/client/schemas/objects.py
fresh_copy(**kwargs)
Return a fresh copy of the state with a new ID.
Source code in src/prefect/client/schemas/objects.py
model_copy(*, update=None, deep=False)
Copying API models should return an object that could be inserted into the database again. The 'timestamp' is reset using the default factory.
Source code in src/prefect/client/schemas/objects.py
result(raise_on_failure=True, fetch=True, retry_result_failure=True)
result(raise_on_failure: bool = True) -> R
result(raise_on_failure: bool = False) -> Union[R, Exception]
Retrieve the result attached to this state.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `raise_on_failure` | `bool` | a boolean specifying whether to raise an exception if the state is of type `FAILED` and the underlying data is an exception | `True` |
| `fetch` | `bool` | a boolean specifying whether to resolve references to persisted results into data. For synchronous users, this defaults to `True`. | `True` |
| `retry_result_failure` | `bool` | a boolean specifying whether to retry on failures to load the result from result storage | `True` |
Raises:

| Type | Description |
|---|---|
| `TypeError` | If the state is failed but the result is not an exception. |

Returns:

| Type | Description |
|---|---|
| `Union[R, Exception]` | The result of the run |
Examples:
Get the result from a flow state
>>> @flow
>>> def my_flow():
>>> return "hello"
>>> my_flow(return_state=True).result()
hello
Get the result from a failed state
>>> @flow
>>> def my_flow():
>>> raise ValueError("oh no!")
>>> state = my_flow(return_state=True) # Error is wrapped in FAILED state
>>> state.result() # Raises `ValueError`
Get the result from a failed state without erroring
>>> @flow
>>> def my_flow():
>>> raise ValueError("oh no!")
>>> state = my_flow(return_state=True)
>>> result = state.result(raise_on_failure=False)
>>> print(result)
ValueError("oh no!")
Get the result from a flow state in an async context
>>> @flow
>>> async def my_flow():
>>> return "hello"
>>> state = await my_flow(return_state=True)
>>> await state.result()
hello
Get the result with raise_on_failure from a flow run in a different memory space
>>> @flow
>>> async def my_flow():
>>> raise ValueError("oh no!")
>>> my_flow.deploy("my_deployment/my_flow")
>>> flow_run = run_deployment("my_deployment/my_flow")
>>> await flow_run.state.result(raise_on_failure=True) # Raises `ValueError("oh no!")`
Source code in src/prefect/client/schemas/objects.py
to_state_create()
Convert this state to a StateCreate type which can be used to set the state of a run in the API.
This method will drop this state's data if it is not a result type. Only results should be sent to the API. Other data is only available locally.
Source code in src/prefect/client/schemas/objects.py
Task
Bases: Generic[P, R]
A Prefect task definition.
Note: We recommend using the @task decorator for most use-cases.
Wraps a function with an entrypoint to the Prefect engine. Calling this class within a flow function creates a new task run.
To preserve the input and output types, we use the generic type variables P and R for "Parameters" and "Returns" respectively.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `fn` | `Callable[P, R]` | The function defining the task. | required |
| `name` | `Optional[str]` | An optional name for the task; if not provided, the name will be inferred from the given function. | `None` |
| `description` | `Optional[str]` | An optional string description for the task. | `None` |
| `tags` | `Optional[Iterable[str]]` | An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a `prefect.tags` context at task runtime. | `None` |
| `version` | `Optional[str]` | An optional string specifying the version of this task definition. | `None` |
| `cache_policy` | `Union[CachePolicy, Type[NotSet]]` | A cache policy that determines the level of caching for this task. | `NotSet` |
| `cache_key_fn` | `Optional[Callable[[TaskRunContext, Dict[str, Any]], Optional[str]]]` | An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again. | `None` |
| `cache_expiration` | `Optional[timedelta]` | An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire. | `None` |
| `task_run_name` | `Optional[Union[Callable[[], str], Callable[[Dict[str, Any]], str], str]]` | An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string. | `None` |
| `retries` | `Optional[int]` | An optional number of times to retry on task run failure. | `None` |
| `retry_delay_seconds` | `Optional[Union[float, int, List[float], Callable[[int], List[float]]]]` | Optionally configures how long to wait before retrying the task after failure. This is only applicable if `retries` is nonzero. | `None` |
| `retry_jitter_factor` | `Optional[float]` | An optional factor that defines the factor to which a retry can be jittered in order to avoid a "thundering herd". | `None` |
| `persist_result` | `Optional[bool]` | A toggle indicating whether the result of this task should be persisted to result storage. Defaults to `None`, in which case the default persistence behavior is used. | `None` |
| `result_storage` | `Optional[ResultStorage]` | An optional block to use to persist the result of this task. Defaults to the value set in the flow the task is called in. | `None` |
| `result_storage_key` | `Optional[str]` | An optional key to store the result in storage at when persisted. Defaults to a unique identifier. | `None` |
| `result_serializer` | `Optional[ResultSerializer]` | An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in. | `None` |
| `timeout_seconds` | `Union[int, float, None]` | An optional number of seconds indicating a maximum runtime for the task. If the task exceeds this runtime, it will be marked as failed. | `None` |
| `log_prints` | `Optional[bool]` | If set, `print` statements in the task will be redirected to the Prefect logger for the task run. | `False` |
| `refresh_cache` | `Optional[bool]` | If set, cached results for the cache key are not used. Defaults to `None`, in which case cached results are used when available. | `None` |
| `on_failure` | `Optional[List[Callable[[Task, TaskRun, State], None]]]` | An optional list of callables to run when the task enters a failed state. | `None` |
| `on_completion` | `Optional[List[Callable[[Task, TaskRun, State], None]]]` | An optional list of callables to run when the task enters a completed state. | `None` |
| `on_commit` | `Optional[List[Callable[[Transaction], None]]]` | An optional list of callables to run when the task's idempotency record is committed. | `None` |
| `on_rollback` | `Optional[List[Callable[[Transaction], None]]]` | An optional list of callables to run when the task rolls back. | `None` |
| `retry_condition_fn` | `Optional[Callable[[Task, TaskRun, State], bool]]` | An optional callable run when a task run returns a Failed state. Should return `True` if the task should continue to its retry policy, and `False` if the task should end as failed. | `None` |
| `viz_return_value` | `Optional[Any]` | An optional value to return when the task dependency tree is visualized. | `None` |
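For illustration, a minimal sketch of constructing a task with a few of these options through the @task decorator (the option values below are arbitrary examples, not defaults):

    from datetime import timedelta
    from prefect import task
    from prefect.tasks import task_input_hash

    @task(
        name="fetch-user",
        retries=3,                            # retry up to three times on failure
        retry_delay_seconds=[1, 5, 10],       # back off between attempts
        cache_key_fn=task_input_hash,         # cache on a hash of the task's inputs
        cache_expiration=timedelta(hours=1),  # cached states expire after an hour
        log_prints=True,                      # route print() to the task run logger
    )
    def fetch_user(user_id: int) -> dict:
        print(f"fetching user {user_id}")
        return {"id": user_id}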
Source code in src/prefect/tasks.py
__call__(*args, return_state=False, wait_for=None, **kwargs)
__call__(*args: P.args, **kwargs: P.kwargs) -> None
__call__(*args: P.args, **kwargs: P.kwargs) -> T
__call__(*args: P.args, return_state: Literal[True], **kwargs: P.kwargs) -> State[T]
Run the task and return the result. If return_state is True, the result is wrapped in a Prefect State, which provides error handling.
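A brief sketch of calling a task directly inside a flow (names here are illustrative):

    from prefect import flow, task

    @task
    def double(x: int) -> int:
        return x * 2

    @flow
    def my_flow():
        value = double(2)                     # runs the task and returns 4
        state = double(3, return_state=True)  # returns a Prefect State instead
        return value, state.result()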
Source code in src/prefect/tasks.py
__get__(instance, owner)
Implement the descriptor protocol so that the task can be used as an instance method. When an instance method is loaded, this method is called with the "self" instance as an argument. We return a copy of the task with that instance bound to the task's function.
Source code in src/prefect/tasks.py
apply_async(args=None, kwargs=None, wait_for=None, dependencies=None)
Create a pending task run for a task worker to execute.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `args` | `Optional[Tuple[Any, ...]]` | Arguments to run the task with | `None` |
| `kwargs` | `Optional[Dict[str, Any]]` | Keyword arguments to run the task with | `None` |
Returns:

| Type | Description |
|---|---|
| `PrefectDistributedFuture` | A PrefectDistributedFuture object representing the pending task run |
Define a task
>>> from prefect import task
>>> @task
>>> def my_task(name: str = "world"):
>>> return f"hello {name}"
Create a pending task run for the task
>>> from prefect import flow
>>> @flow
>>> def my_flow():
>>> my_task.apply_async(("marvin",))
Wait for a task to finish
>>> @flow
>>> def my_flow():
>>> my_task.apply_async(("marvin",)).wait()
>>> @flow
>>> def my_flow():
>>> print(my_task.apply_async(("marvin",)).result())
>>>
>>> my_flow()
hello marvin
TODO: Enforce ordering between tasks that do not exchange data
>>> @task
>>> def task_1():
>>> pass
>>>
>>> @task
>>> def task_2():
>>> pass
>>>
>>> @flow
>>> def my_flow():
>>> x = task_1.apply_async()
>>>
>>> # task 2 will wait for task_1 to complete
>>> y = task_2.apply_async(wait_for=[x])
Source code in src/prefect/tasks.py
delay(*args, **kwargs)
An alias for apply_async with simpler calling semantics.
Avoids having to use explicit "args" and "kwargs" arguments. Arguments will pass through as-is to the task.
Examples:
Define a task
>>> from prefect import task
>>> @task
>>> def my_task(name: str = "world"):
>>> return f"hello {name}"
Create a pending task run for the task
>>> from prefect import flow
>>> @flow
>>> def my_flow():
>>> my_task.delay("marvin")
Wait for a task to finish
>>> @flow
>>> def my_flow():
>>> my_task.delay("marvin").wait()
Use the result from a task in a flow
>>> @flow
>>> def my_flow():
>>> print(my_task.delay("marvin").result())
>>>
>>> my_flow()
hello marvin
Source code in src/prefect/tasks.py
map(*args, return_state=False, wait_for=None, deferred=False, **kwargs)
map(*args: Any, return_state: Literal[True], wait_for: Optional[Iterable[Union[PrefectFuture[T], T]]] = ..., deferred: bool = ..., **kwargs: Any) -> List[State[R]]
map(*args: Any, wait_for: Optional[Iterable[Union[PrefectFuture[T], T]]] = ..., deferred: bool = ..., **kwargs: Any) -> PrefectFutureList[R]
map(*args: Any, return_state: Literal[False], wait_for: Optional[Iterable[Union[PrefectFuture[T], T]]] = ..., deferred: bool = ..., **kwargs: Any) -> PrefectFutureList[R]
Submit a mapped run of the task to a worker.
Must be called within a flow run context. Will return a list of futures that should be waited on before exiting the flow context to ensure all mapped tasks have completed.
Must be called with at least one iterable and all iterables must be the same length. Any arguments that are not iterable will be treated as a static value and each task run will receive the same value.
Will create as many task runs as the length of the iterable(s) in the backing API and submit the task runs to the flow's task runner. This call blocks if given a future as input while the future is resolved. It also blocks while the tasks are being submitted; once they are submitted, the flow function will continue executing.
This method is always synchronous, even if the underlying user function is asynchronous.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `*args` | `Any` | Iterable and static arguments to run the tasks with | `()` |
| `return_state` | `bool` | Return a list of Prefect States that wrap the results of each task run. | `False` |
| `wait_for` | `Optional[Iterable[Union[PrefectFuture[T], T]]]` | Upstream task futures to wait for before starting the task | `None` |
| `**kwargs` | `Any` | Keyword iterable arguments to run the task with | `{}` |
Returns:

| Type | Description |
|---|---|
| | A list of futures allowing asynchronous access to the state of the tasks |
Define a task
>>> from prefect import task
>>> @task
>>> def my_task(x):
>>> return x + 1
Create mapped tasks
>>> from prefect import flow
>>> @flow
>>> def my_flow():
>>> return my_task.map([1, 2, 3])
Wait for all mapped tasks to finish
>>> @flow
>>> def my_flow():
>>> futures = my_task.map([1, 2, 3])
>>> futures.wait()
>>> # Now all of the mapped tasks have finished
>>> my_task(10)
Use the result from mapped tasks in a flow
>>> @flow
>>> def my_flow():
>>> futures = my_task.map([1, 2, 3])
>>> for x in futures.result():
>>> print(x)
>>> my_flow()
2
3
4
Enforce ordering between tasks that do not exchange data
>>> @task
>>> def task_1(x):
>>> pass
>>>
>>> @task
>>> def task_2(y):
>>> pass
>>>
>>> @flow
>>> def my_flow():
>>> x = task_1.submit()
>>>
>>> # task 2 will wait for task_1 to complete
>>> y = task_2.map([1, 2, 3], wait_for=[x])
>>> return y
Use a non-iterable input as a constant across mapped tasks
>>> @task
>>> def display(prefix, item):
>>> print(prefix, item)
>>>
>>> @flow
>>> def my_flow():
>>> return display.map("Check it out: ", [1, 2, 3])
>>>
>>> my_flow()
Check it out: 1
Check it out: 2
Check it out: 3
Use `unmapped` to treat an iterable argument as a constant
>>> from prefect import unmapped
>>>
>>> @task
>>> def add_n_to_items(items, n):
>>> return [item + n for item in items]
>>>
>>> @flow
>>> def my_flow():
>>> return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])
>>>
>>> my_flow()
[[11, 21], [12, 22], [13, 23]]
Source code in src/prefect/tasks.py
serve()
async
Serve the task using the provided task runner. This method is used to establish a websocket connection with the Prefect server and listen for submitted task runs to execute.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
task_runner | | The task runner to use for serving the task. If not provided, the default task runner will be used. | required |
Examples:
Serve a task using the default task runner
>>> @task
>>> def my_task():
>>> return 1
>>> my_task.serve()
Source code in src/prefect/tasks.py, lines 1550-1570
submit(*args, return_state=False, wait_for=None, **kwargs)
submit(*args: P.args, **kwargs: P.kwargs) -> PrefectFuture[NoReturn]
submit(*args: P.args, **kwargs: P.kwargs) -> PrefectFuture[T]
submit(*args: P.args, return_state: Literal[True], **kwargs: P.kwargs) -> State[T]
Submit a run of the task to the engine.
Will create a new task run in the backing API and submit the task to the flow's task runner. This call only blocks execution while the task is being submitted; once it is submitted, the flow function will continue executing.
This method is always synchronous, even if the underlying user function is asynchronous.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
*args | Any | Arguments to run the task with | () |
return_state | bool | Return the result of the task run wrapped in a Prefect State. | False |
wait_for | Optional[Iterable[PrefectFuture]] | Upstream task futures to wait for before starting the task | None |
**kwargs | Any | Keyword arguments to run the task with | {} |
Returns:
Type | Description |
---|---|
| If `return_state` is `False`, a future allowing asynchronous access to the state of the task |
| If `return_state` is `True`, the final state of the task run |
Define a task
>>> from prefect import task
>>> @task
>>> def my_task():
>>> return "hello"
Run a task in a flow
>>> from prefect import flow
>>> @flow
>>> def my_flow():
>>> my_task.submit()
Wait for a task to finish
>>> @flow
>>> def my_flow():
>>> my_task.submit().wait()
Use the result from a task in a flow
>>> @flow
>>> def my_flow():
>>> print(my_task.submit().result())
>>>
>>> my_flow()
hello
Run an async task in an async flow
>>> @task
>>> async def my_async_task():
>>> pass
>>>
>>> @flow
>>> async def my_flow():
>>> my_async_task.submit()
Run a sync task in an async flow
>>> @flow
>>> async def my_flow():
>>> my_task.submit()
Enforce ordering between tasks that do not exchange data
>>> @task
>>> def task_1():
>>> pass
>>>
>>> @task
>>> def task_2():
>>> pass
>>>
>>> @flow
>>> def my_flow():
>>> x = task_1.submit()
>>>
>>> # task 2 will wait for task_1 to complete
>>> y = task_2.submit(wait_for=[x])
Source code in src/prefect/tasks.py, lines 1053-1175
with_options(*, name=None, description=None, tags=None, cache_policy=NotSet, cache_key_fn=None, task_run_name=NotSet, cache_expiration=None, retries=NotSet, retry_delay_seconds=NotSet, retry_jitter_factor=NotSet, persist_result=NotSet, result_storage=NotSet, result_serializer=NotSet, result_storage_key=NotSet, cache_result_in_memory=None, timeout_seconds=None, log_prints=NotSet, refresh_cache=NotSet, on_completion=None, on_failure=None, retry_condition_fn=None, viz_return_value=None)
Create a new task from the current object, updating provided options.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
name | Optional[str] | A new name for the task. | None |
description | Optional[str] | A new description for the task. | None |
tags | Optional[Iterable[str]] | A new set of tags for the task. If given, existing tags are ignored, not merged. | None |
cache_key_fn | Optional[Callable[[TaskRunContext, Dict[str, Any]], Optional[str]]] | A new cache key function for the task. | None |
cache_expiration | Optional[timedelta] | A new cache expiration time for the task. | None |
task_run_name | Optional[Union[Callable[[], str], Callable[[Dict[str, Any]], str], str, Type[NotSet]]] | An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string. | NotSet |
retries | Union[int, Type[NotSet]] | A new number of times to retry on task run failure. | NotSet |
retry_delay_seconds | Union[float, int, List[float], Callable[[int], List[float]], Type[NotSet]] | Optionally configures how long to wait before retrying the task after failure. This is only applicable if `retries` is nonzero. | NotSet |
retry_jitter_factor | Union[float, Type[NotSet]] | An optional factor by which a retry delay can be jittered in order to avoid a "thundering herd". | NotSet |
persist_result | Union[bool, Type[NotSet]] | A new option for enabling or disabling result persistence. | NotSet |
result_storage | Union[ResultStorage, Type[NotSet]] | A new storage type to use for results. | NotSet |
result_serializer | Union[ResultSerializer, Type[NotSet]] | A new serializer to use for results. | NotSet |
result_storage_key | Union[str, Type[NotSet]] | A new key for the persisted result to be stored at. | NotSet |
timeout_seconds | Union[int, float, None] | A new maximum time for the task to complete in seconds. | None |
log_prints | Union[bool, Type[NotSet]] | A new option for enabling or disabling redirection of `print` statements to the Prefect logger. | NotSet |
refresh_cache | Union[bool, Type[NotSet]] | A new option for enabling or disabling cache refresh. | NotSet |
on_completion | Optional[List[Callable[[Task, TaskRun, State], Union[Awaitable[None], None]]]] | A new list of callables to run when the task enters a completed state. | None |
on_failure | Optional[List[Callable[[Task, TaskRun, State], Union[Awaitable[None], None]]]] | A new list of callables to run when the task enters a failed state. | None |
retry_condition_fn | Optional[Callable[[Task, TaskRun, State], bool]] | An optional callable run when a task run returns a Failed state. Should return `True` if the task should continue to its retry policy, and `False` if the task run should end as failed. | None |
viz_return_value | Optional[Any] | An optional value to return when the task dependency tree is visualized. | None |
Returns:
Type | Description |
---|---|
| A new `Task` instance with the provided options updated |
Create a new task from an existing task and update the name
>>> @task(name="My task")
>>> def my_task():
>>> return 1
>>>
>>> new_task = my_task.with_options(name="My new task")
Create a new task from an existing task and update the retry settings
>>> from random import randint
>>>
>>> @task(retries=1, retry_delay_seconds=5)
>>> def my_task():
>>> x = randint(0, 5)
>>> if x >= 3: # Make a task that fails sometimes
>>> raise ValueError("Retry me please!")
>>> return x
>>>
>>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)
Use a task with updated options within a flow
>>> @task(name="My task")
>>> def my_task():
>>> return 1
>>>
>>> @flow
>>> def my_flow():
>>> new_task = my_task.with_options(name="My new task")
>>> new_task()
Source code in src/prefect/tasks.py, lines 526-698
Transaction
Bases: ContextModel
A base model for transaction state.
Source code in src/prefect/transactions.py, lines 59-429
get(name, default=NotSet)
Get a stored value from the transaction.
Child transactions will return values from their parents unless a value with the same name is set in the child transaction.
Direct changes to returned values will not update the stored value. To update the stored value, use the set method.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
name | str | The name of the value to get | required |
default | Any | The default value to return if the value is not found | NotSet |
Returns:
Type | Description |
---|---|
Any | The value from the transaction |
Examples:
Get a value from the transaction:
with transaction() as txn:
    txn.set("key", "value")
    ...
    assert txn.get("key") == "value"
Get a value from a parent transaction:
with transaction() as parent:
    parent.set("key", "parent_value")
    with transaction() as child:
        assert child.get("key") == "parent_value"
Update a stored value:
with transaction() as txn:
    txn.set("key", [1, 2, 3])
    value = txn.get("key")
    value.append(4)
    # Stored value is not updated until `.set` is called
    assert value == [1, 2, 3, 4]
    assert txn.get("key") == [1, 2, 3]
    txn.set("key", value)
    assert txn.get("key") == [1, 2, 3, 4]
Source code in src/prefect/transactions.py, lines 102-162
set(name, value)
Set a stored value in the transaction.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
name | str | The name of the value to set | required |
value | Any | The value to set | required |
Examples:
Set a value for use later in the transaction:
with transaction() as txn:
    txn.set("key", "value")
    ...
    assert txn.get("key") == "value"
Source code in src/prefect/transactions.py, lines 83-100
stage(value, on_rollback_hooks=None, on_commit_hooks=None)
Stage a value to be committed later.
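A minimal sketch of staging a value inside a transaction is shown below. The hook signatures, and the assumption that commit hooks fire when the transaction block exits cleanly while rollback hooks fire on error, are illustrative rather than taken from the reference above:
from prefect.transactions import transaction

def log_commit(*args):
    # Hook signature is assumed; accept whatever the engine passes.
    print("staged value committed")

def log_rollback(*args):
    print("staged value rolled back")

with transaction() as txn:
    # Stage a value along with hooks to run on commit or rollback.
    txn.stage(
        {"rows_written": 10},
        on_commit_hooks=[log_commit],
        on_rollback_hooks=[log_rollback],
    )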
Source code in src/prefect/transactions.py, lines 379-395
allow_failure
Bases: BaseAnnotation[T]
Wrapper for states or futures.
Indicates that the upstream run for this input can be failed.
Generally, Prefect will not allow a downstream run to start if any of its inputs are failed. This annotation allows you to opt into receiving a failed input downstream.
If the input is from a failed run, the attached exception will be passed to your function.
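A short sketch of opting into a failed upstream input (the task and function names here are illustrative):
from prefect import allow_failure, flow, task

@task
def may_fail():
    raise ValueError("boom")

@task
def handle(result):
    # When the upstream run failed, `result` is the attached exception.
    print(f"upstream produced: {result!r}")

@flow
def my_flow():
    upstream = may_fail.submit()
    handle.submit(allow_failure(upstream))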
Source code in src/prefect/utilities/annotations.py, lines 46-58
unmapped
Bases: BaseAnnotation[T]
Wrapper for iterables.
Indicates that this input should be sent as-is to all runs created during a mapping operation instead of being split.
Source code in src/prefect/utilities/annotations.py, lines 33-43
deploy(*deployments, work_pool_name=None, image=None, build=True, push=True, print_next_steps_message=True, ignore_warnings=False)
async
Deploy the provided list of deployments to dynamic infrastructure via a work pool.
By default, calling this function will build a Docker image for the deployments, push it to a registry, and create each deployment via the Prefect API that will run the corresponding flow on the given schedule.
If you want to use an existing image, you can pass build=False to skip building and pushing an image.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
*deployments | RunnerDeployment | A list of deployments to deploy. | () |
work_pool_name | Optional[str] | The name of the work pool to use for these deployments. Defaults to the value of the `PREFECT_DEFAULT_WORK_POOL_NAME` setting. | None |
image | Optional[Union[str, DockerImage]] | The name of the Docker image to build, including the registry and repository. Pass a DockerImage instance to customize the Dockerfile used and build arguments. | None |
build | bool | Whether or not to build a new image for the flow. If False, the provided image will be used as-is and pulled at runtime. | True |
push | bool | Whether or not to push the built image to a registry. | True |
print_next_steps_message | bool | Whether or not to print a message with next steps after deploying the deployments. | True |
Returns:
Type | Description |
---|---|
List[UUID] | A list of deployment IDs for the created/updated deployments. |
Examples:
Deploy a group of flows to a work pool:
from prefect import deploy, flow

@flow(log_prints=True)
def local_flow():
    print("I'm a locally defined flow!")

if __name__ == "__main__":
    deploy(
        local_flow.to_deployment(name="example-deploy-local-flow"),
        flow.from_source(
            source="https://github.com/org/repo.git",
            entrypoint="flows.py:my_flow",
        ).to_deployment(
            name="example-deploy-remote-flow",
        ),
        work_pool_name="my-work-pool",
        image="my-registry/my-image:dev",
    )
Source code in src/prefect/deployments/runner.py, lines 788-1024
flow(__fn=None, *, name=None, version=None, flow_run_name=None, retries=None, retry_delay_seconds=None, task_runner=None, description=None, timeout_seconds=None, validate_parameters=True, persist_result=None, result_storage=None, result_serializer=None, cache_result_in_memory=True, log_prints=None, on_completion=None, on_failure=None, on_cancellation=None, on_crashed=None, on_running=None)
flow(__fn: Callable[P, R]) -> Flow[P, R]
flow(*, name: Optional[str] = None, version: Optional[str] = None, flow_run_name: Optional[Union[Callable[[], str], str]] = None, retries: Optional[int] = None, retry_delay_seconds: Optional[Union[int, float]] = None, task_runner: Optional[TaskRunner] = None, description: Optional[str] = None, timeout_seconds: Union[int, float, None] = None, validate_parameters: bool = True, persist_result: Optional[bool] = None, result_storage: Optional[ResultStorage] = None, result_serializer: Optional[ResultSerializer] = None, cache_result_in_memory: bool = True, log_prints: Optional[bool] = None, on_completion: Optional[List[Callable[[FlowSchema, FlowRun, State], Union[Awaitable[None], None]]]] = None, on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], Union[Awaitable[None], None]]]] = None, on_cancellation: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None, on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None, on_running: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None) -> Callable[[Callable[P, R]], Flow[P, R]]
Decorator to designate a function as a Prefect workflow.
This decorator may be used for asynchronous or synchronous functions.
Flow parameters must be serializable by Pydantic.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
name | Optional[str] | An optional name for the flow; if not provided, the name will be inferred from the given function. | None |
version | Optional[str] | An optional version string for the flow; if not provided, we will attempt to create a version string as a hash of the file containing the wrapped function; if the file cannot be located, the version will be null. | None |
flow_run_name | Optional[Union[Callable[[], str], str]] | An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string. | None |
retries | Optional[int] | An optional number of times to retry on flow run failure. | None |
retry_delay_seconds | Union[int, float, None] | An optional number of seconds to wait before retrying the flow after failure. This is only applicable if `retries` is nonzero. | None |
task_runner | Optional[TaskRunner] | An optional task runner to use for task execution within the flow; if not provided, a default task runner will be used. | None |
description | Optional[str] | An optional string description for the flow; if not provided, the description will be pulled from the docstring for the decorated function. | None |
timeout_seconds | Union[int, float, None] | An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called. | None |
validate_parameters | bool | By default, parameters passed to flows are validated by Pydantic. This will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type; for example, if a parameter is defined as `x: int` and "1" is passed, it will be resolved to `1`. | True |
persist_result | Optional[bool] | An optional toggle indicating whether the result of this flow should be persisted to result storage. Defaults to `None`. | None |
result_storage | Optional[ResultStorage] | An optional block to use to persist the result of this flow. This value will be used as the default for any tasks in this flow. If not provided, the local file system will be used unless called as a subflow, at which point the default will be loaded from the parent flow. | None |
result_serializer | Optional[ResultSerializer] | An optional serializer to use to serialize the result of this flow for persistence. This value will be used as the default for any tasks in this flow. If not provided, the value of the default result serializer setting will be used unless called as a subflow, at which point the default will be loaded from the parent flow. | None |
cache_result_in_memory | bool | An optional toggle indicating whether the cached result of running the flow should be stored in memory. Defaults to `True`. | True |
log_prints | Optional[bool] | If set, `print` statements in the flow will be redirected to the Prefect logger for the flow run. | None |
on_completion | Optional[List[Callable[[Flow, FlowRun, State], Union[Awaitable[None], None]]]] | An optional list of functions to call when the flow run is completed. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run. | None |
on_failure | Optional[List[Callable[[Flow, FlowRun, State], Union[Awaitable[None], None]]]] | An optional list of functions to call when the flow run fails. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run. | None |
on_cancellation | Optional[List[Callable[[Flow, FlowRun, State], None]]] | An optional list of functions to call when the flow run is cancelled. These functions will be passed the flow, flow run, and final state. | None |
on_crashed | Optional[List[Callable[[Flow, FlowRun, State], None]]] | An optional list of functions to call when the flow run crashes. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run. | None |
on_running | Optional[List[Callable[[Flow, FlowRun, State], None]]] | An optional list of functions to call when the flow run is started. Each function should accept three arguments: the flow, the flow run, and the current state. | None |
Returns:
Type | Description |
---|---|
| A callable `Flow` object which, when called, will run the flow and return its final state. |
Examples:
Define a simple flow
>>> from prefect import flow
>>> @flow
>>> def add(x, y):
>>> return x + y
Define an async flow
>>> @flow
>>> async def add(x, y):
>>> return x + y
Define a flow with a version and description
>>> @flow(version="first-flow", description="This flow is empty!")
>>> def my_flow():
>>> pass
Define a flow with a custom name
>>> @flow(name="The Ultimate Flow")
>>> def my_flow():
>>> pass
Define a flow that submits its tasks to dask
>>> from prefect_dask.task_runners import DaskTaskRunner
>>>
>>> @flow(task_runner=DaskTaskRunner)
>>> def my_flow():
>>> pass
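Run a callable when the flow run fails (a sketch; the hook follows the documented (flow, flow_run, state) signature and its body is illustrative)
from prefect import flow

def notify_failure(flow, flow_run, state):
    # Called with the flow, the flow run, and the final state, per the table above.
    print(f"{flow.name} run {flow_run.name} ended as {state.type}")

@flow(on_failure=[notify_failure])
def fragile_flow():
    raise RuntimeError("whoops")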
Source code in src/prefect/flows.py, lines 1459-1645
get_client(httpx_settings=None, sync_client=False)
get_client(httpx_settings: Optional[Dict[str, Any]] = None, sync_client: Literal[False] = False) -> PrefectClient
get_client(httpx_settings: Optional[Dict[str, Any]] = None, sync_client: Literal[True] = True) -> SyncPrefectClient
Retrieve an HTTP client for communicating with the Prefect REST API.
The client must be context managed; for example:
async with get_client() as client:
    await client.hello()
To return a synchronous client, pass sync_client=True:
with get_client(sync_client=True) as client:
    client.hello()
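A slightly fuller sketch, assuming the client's read_flow_runs query method is available on the orchestration client:
import asyncio
from prefect import get_client

async def list_recent_runs():
    async with get_client() as client:
        # read_flow_runs is assumed here as the query method; limit caps the number returned.
        runs = await client.read_flow_runs(limit=5)
        for run in runs:
            print(run.id, run.name)

asyncio.run(list_recent_runs())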
Source code in src/prefect/client/orchestration.py, lines 170-243
get_run_logger(context=None, **kwargs)
Get a Prefect logger for the current task run or flow run.
The logger will be named either prefect.task_runs or prefect.flow_runs. Contextual data about the run will be attached to the log records.
These loggers are connected to the APILogHandler by default to send log records to the API.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
context | Optional[RunContext] | A specific context may be provided as an override. By default, the context is inferred from global state and this should not be needed. | None |
**kwargs | str | Additional keyword arguments will be attached to the log records in addition to the run metadata | {} |
Raises:
Type | Description |
---|---|
MissingContextError | If no context can be found |
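Example (a minimal sketch of grabbing the run logger inside a flow):
from prefect import flow, get_run_logger

@flow
def my_flow():
    logger = get_run_logger()
    logger.info("Hello from the flow run!")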
Source code in src/prefect/logging/loggers.py, lines 81-139
pause_flow_run(wait_for_input=None, timeout=3600, poll_interval=10, key=None)
async
pause_flow_run(wait_for_input: None = None, timeout: int = 3600, poll_interval: int = 10, key: Optional[str] = None) -> None
pause_flow_run(wait_for_input: Type[T], timeout: int = 3600, poll_interval: int = 10, key: Optional[str] = None) -> T
Pauses the current flow run by blocking execution until resumed.
When called within a flow run, execution will block and no downstream tasks will run until the flow is resumed. Task runs that have already started will continue running. A timeout parameter can be passed that will fail the flow run if it has not been resumed within the specified time.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
timeout | int | The number of seconds to wait for the flow to be resumed before failing. Defaults to 1 hour (3600 seconds). If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming. | 3600 |
poll_interval | int | The number of seconds between checking whether the flow has been resumed. Defaults to 10 seconds. | 10 |
key | Optional[str] | An optional key to prevent calling pauses more than once. This defaults to the number of pauses observed by the flow so far, and prevents pauses that use the "reschedule" option from running the same pause twice. A custom key can be supplied for custom pausing behavior. | None |
wait_for_input | Optional[Type[T]] | A subclass of `RunInput`; if provided, the flow will wait for input of this type to be supplied when the run is resumed, and the input will be returned from this call. | None |
Example:
@task
def task_one():
    for i in range(3):
        sleep(1)

@flow
def my_flow():
    terminal_state = task_one.submit(return_state=True)
    if terminal_state.type == StateType.COMPLETED:
        print("Task one succeeded! Pausing flow run..")
        pause_flow_run(timeout=2)
    else:
        print("Task one failed. Skipping pause flow run..")
Source code in src/prefect/flow_runs.py, lines 156-211
resume_flow_run(flow_run_id, run_input=None)
async
Resumes a paused flow.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
flow_run_id | | The flow_run_id to resume | required |
run_input | Optional[Dict] | A dictionary of inputs to provide to the flow run. | None |
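A minimal sketch of resuming a paused run from another process; the flow run id is a placeholder, and the run_input keys must match what the paused flow expects:
import asyncio
from prefect import resume_flow_run

async def approve(flow_run_id):
    # Provide the input the paused flow is waiting for.
    await resume_flow_run(flow_run_id, run_input={"approved": True})

# asyncio.run(approve("<paused-flow-run-id>"))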
Source code in src/prefect/flow_runs.py, lines 432-454
serve(*args, pause_on_shutdown=True, print_starting_message=True, limit=None, **kwargs)
Serve the provided list of deployments.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
*args | RunnerDeployment | A list of deployments to serve. | () |
pause_on_shutdown | bool | Whether or not to automatically pause deployment schedules on shutdown. | True |
print_starting_message | bool | Whether or not to print a message to the console on startup. | True |
limit | Optional[int] | The maximum number of runs that can be executed concurrently. | None |
**kwargs | | Additional keyword arguments to pass to the runner. | {} |
Examples:
Prepare two deployments and serve them:
import datetime
from prefect import flow, serve

@flow
def my_flow(name):
    print(f"hello {name}")

@flow
def my_other_flow(name):
    print(f"goodbye {name}")

if __name__ == "__main__":
    # Run once a day
    hello_deploy = my_flow.to_deployment(
        "hello", tags=["dev"], interval=datetime.timedelta(days=1)
    )

    # Run every Sunday at 4:00 AM
    bye_deploy = my_other_flow.to_deployment(
        "goodbye", tags=["dev"], cron="0 4 * * sun"
    )

    serve(hello_deploy, bye_deploy)
Source code in src/prefect/flows.py, lines 1766-1869
suspend_flow_run(wait_for_input=None, flow_run_id=None, timeout=3600, key=None, client=None)
async
suspend_flow_run(wait_for_input: None = None, flow_run_id: Optional[UUID] = None, timeout: Optional[int] = 3600, key: Optional[str] = None, client: PrefectClient = None) -> None
suspend_flow_run(wait_for_input: Type[T], flow_run_id: Optional[UUID] = None, timeout: Optional[int] = 3600, key: Optional[str] = None, client: PrefectClient = None) -> T
Suspends a flow run by stopping code execution until resumed.
When suspended, the flow run will continue execution until the NEXT task is orchestrated, at which point the flow will exit. Any tasks that have already started will run until completion. When resumed, the flow run will be rescheduled to finish execution. In order to suspend a flow run in this way, the flow needs to have an associated deployment and results need to be configured with the `persist_result` option.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
flow_run_id | Optional[UUID] | A flow run id. If supplied, this function will attempt to suspend the specified flow run. If not supplied, it will attempt to suspend the current flow run. | None |
timeout | Optional[int] | The number of seconds to wait for the flow to be resumed before failing. Defaults to 1 hour (3600 seconds). If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming. | 3600 |
key | Optional[str] | An optional key to prevent calling suspend more than once. This defaults to a random string and prevents suspends from running the same suspend twice. A custom key can be supplied for custom suspending behavior. | None |
wait_for_input | Optional[Type[T]] | A subclass of `RunInput`; if provided, the flow will wait for input of this type to be supplied when the run is resumed, and the input will be returned from this call. | None |
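A minimal sketch of suspending the current run from inside a flow; this assumes the flow run comes from a deployment and that the function can be called synchronously from a sync flow, as with pause_flow_run above:
from prefect import flow, suspend_flow_run

@flow(persist_result=True)
def long_running_flow():
    # ... do some work, then release infrastructure until the run is resumed.
    suspend_flow_run(timeout=3600)
    print("Resumed and rescheduled to finish execution")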
Source code in src/prefect/flow_runs.py, lines 326-429
tags(*new_tags)
Context manager to add tags to flow and task run calls.
Tags are always combined with any existing tags.
Yields:
Type | Description |
---|---|
Set[str] | The current set of tags |
Examples:
>>> from prefect import tags, task, flow
>>> @task
>>> def my_task():
>>> pass
Run a task with tags
>>> @flow
>>> def my_flow():
>>> with tags("a", "b"):
>>> my_task() # has tags: a, b
Run a flow with tags
>>> @flow
>>> def my_flow():
>>> pass
>>> with tags("a", "b"):
>>> my_flow() # has tags: a, b
Run a task with nested tag contexts
>>> @flow
>>> def my_flow():
>>> with tags("a", "b"):
>>> with tags("c", "d"):
>>> my_task() # has tags: a, b, c, d
>>> my_task() # has tags: a, b
Inspect the current tags
>>> @flow
>>> def my_flow():
>>> with tags("c", "d"):
>>> with tags("e", "f") as current_tags:
>>> print(current_tags)
>>> with tags("a", "b"):
>>> my_flow()
{"a", "b", "c", "d", "e", "f"}
Source code in src/prefect/context.py, lines 506-560
task(__fn=None, *, name=None, description=None, tags=None, version=None, cache_policy=NotSet, cache_key_fn=None, cache_expiration=None, task_run_name=None, retries=None, retry_delay_seconds=None, retry_jitter_factor=None, persist_result=None, result_storage=None, result_storage_key=None, result_serializer=None, cache_result_in_memory=True, timeout_seconds=None, log_prints=None, refresh_cache=None, on_completion=None, on_failure=None, retry_condition_fn=None, viz_return_value=None)
task(__fn: Callable[P, R]) -> Task[P, R]
task(*, name: Optional[str] = None, description: Optional[str] = None, tags: Optional[Iterable[str]] = None, version: Optional[str] = None, cache_policy: Union[CachePolicy, Type[NotSet]] = NotSet, cache_key_fn: Optional[Callable[[TaskRunContext, Dict[str, Any]], Optional[str]]] = None, cache_expiration: Optional[datetime.timedelta] = None, task_run_name: Optional[Union[Callable[[], str], Callable[[Dict[str, Any]], str], str]] = None, retries: int = 0, retry_delay_seconds: Union[float, int, List[float], Callable[[int], List[float]]] = 0, retry_jitter_factor: Optional[float] = None, persist_result: Optional[bool] = None, result_storage: Optional[ResultStorage] = None, result_storage_key: Optional[str] = None, result_serializer: Optional[ResultSerializer] = None, cache_result_in_memory: bool = True, timeout_seconds: Union[int, float, None] = None, log_prints: Optional[bool] = None, refresh_cache: Optional[bool] = None, on_completion: Optional[List[Callable[[Task, TaskRun, State], None]]] = None, on_failure: Optional[List[Callable[[Task, TaskRun, State], None]]] = None, retry_condition_fn: Optional[Callable[[Task, TaskRun, State], bool]] = None, viz_return_value: Any = None) -> Callable[[Callable[P, R]], Task[P, R]]
Decorator to designate a function as a task in a Prefect workflow.
This decorator may be used for asynchronous or synchronous functions.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
name | Optional[str] | An optional name for the task; if not provided, the name will be inferred from the given function. | None |
description | Optional[str] | An optional string description for the task. | None |
tags | Optional[Iterable[str]] | An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a `prefect.tags` context at task runtime. | None |
version | Optional[str] | An optional string specifying the version of this task definition. | None |
cache_key_fn | Union[Callable[[TaskRunContext, Dict[str, Any]], Optional[str]], None] | An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again. | None |
cache_expiration | Optional[timedelta] | An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire. | None |
task_run_name | Optional[Union[Callable[[], str], Callable[[Dict[str, Any]], str], str]] | An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string. | None |
retries | Optional[int] | An optional number of times to retry on task run failure. | None |
retry_delay_seconds | Union[float, int, List[float], Callable[[int], List[float]], None] | Optionally configures how long to wait before retrying the task after failure. This is only applicable if `retries` is nonzero. | None |
retry_jitter_factor | Optional[float] | An optional factor by which a retry delay can be jittered in order to avoid a "thundering herd". | None |
persist_result | Optional[bool] | A toggle indicating whether the result of this task should be persisted to result storage. Defaults to `None`. | None |
result_storage | Optional[ResultStorage] | An optional block to use to persist the result of this task. Defaults to the value set in the flow the task is called in. | None |
result_storage_key | Optional[str] | An optional key to store the result in storage at when persisted. Defaults to a unique identifier. | None |
result_serializer | Optional[ResultSerializer] | An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in. | None |
timeout_seconds | Union[int, float, None] | An optional number of seconds indicating a maximum runtime for the task. If the task exceeds this runtime, it will be marked as failed. | None |
log_prints | Optional[bool] | If set, `print` statements in the task will be redirected to the Prefect logger for the task run. | None |
refresh_cache | Optional[bool] | If set, cached results for the cache key are not used. Defaults to `None`. | None |
on_failure | Optional[List[Callable[[Task, TaskRun, State], None]]] | An optional list of callables to run when the task enters a failed state. | None |
on_completion | Optional[List[Callable[[Task, TaskRun, State], None]]] | An optional list of callables to run when the task enters a completed state. | None |
retry_condition_fn | Optional[Callable[[Task, TaskRun, State], bool]] | An optional callable run when a task run returns a Failed state. Should return `True` if the task should continue to its retry policy, and `False` if the task run should end as failed. | None |
viz_return_value | Any | An optional value to return when the task dependency tree is visualized. | None |
Returns:
Type | Description |
---|---|
| A callable `Task` object which, when called, will submit the task for execution. |
Examples:
Define a simple task
>>> @task
>>> def add(x, y):
>>> return x + y
Define an async task
>>> @task
>>> async def add(x, y):
>>> return x + y
Define a task with tags and a description
>>> @task(tags={"a", "b"}, description="This task is empty but its my first!")
>>> def my_task():
>>> pass
Define a task with a custom name
>>> @task(name="The Ultimate Task")
>>> def my_task():
>>> pass
Define a task that retries 3 times with a 5 second delay between attempts
>>> from random import randint
>>>
>>> @task(retries=3, retry_delay_seconds=5)
>>> def my_task():
>>> x = randint(0, 5)
>>> if x >= 3: # Make a task that fails sometimes
>>> raise ValueError("Retry me please!")
>>> return x
Define a task that is cached for a day based on its inputs
>>> from prefect.tasks import task_input_hash
>>> from datetime import timedelta
>>>
>>> @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))
>>> def my_task():
>>> return "hello"
Source code in src/prefect/tasks.py, lines 1617-1821